Blog Archive

Long live knowledge! Click on a headline to read the teaser.

by Yubo 'Paul' Yang › Marching Cubes
Marching cubes is a classic algorithm for isosurface extraction. It uses efficient table lookups to extract a 2D isosurface from 3D volumetric data. The isosurface is represented as a collection of triangles, which are easily rendered by graphics hardware. Although marching cubes is a rather dated algorithm, it demonstrates many of the concepts behind modern rendering approaches. Read More ›

by Eli Chertkov › Phase transitions in random satisfiability problems
This talk is about an application of the statistical physics of phase transitions to the analysis of a class of NP-complete computational problems. Read More ›

by Kiel Williams › Relaxation Methods in the Solution of Partial Differential Equations
How can we use a simple conceptual approach to numerically solve a nontrivial PDE? Read More ›

by Ben Prather › Hash Function
Google achieved the first collision of SHA-1. Read More ›

by Alex Munoz › Pagerank
PageRank is an integral part of Google’s search engine horsepower, but the mathematics behind it can describe the properties of general directed networks by weighing the importance of individual nodes. Read More ›
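As a quick illustration (not taken from the post itself), the core of PageRank is power iteration with damping on a column-stochastic link matrix; the 4-page graph below is a made-up toy example:

```python
import numpy as np

# Toy directed graph of 4 pages. M is column-stochastic:
# M[i, j] = probability of following a link from page j to page i.
M = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.5, 1.0, 0.0],
])

d = 0.85                      # damping factor (random-surfer teleport)
n = M.shape[0]
r = np.full(n, 1.0 / n)       # start from a uniform rank vector
for _ in range(100):          # power iteration
    r = (1 - d) / n + d * M @ r

print(r)  # importance scores; they sum to 1
```

The damping term guarantees the iteration converges to a unique, strictly positive rank vector even when some pages have no inbound links.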

by Xiongjie Yu › Neural Network on a Tensor Train
Redundancy in the weight matrix of a neural network can be greatly reduced with a tensor network formulation. Compression rates above 99% can be achieved while maintaining accuracy. The tensor train representation of a neural network is compact enough that this formalism may allow neural networks to be trained on mobile devices. Read More ›

by Benjamin Villalonga Correa › Audio Compression
Problems in audio compression form just a subset of the general problem of signal compression, and general techniques can be applied to solve them. However, it is possible to benefit greatly from the very particular way in which the human brain perceives and interprets sound, optimizing compression techniques to keep only information that is relevant to human perception. In this presentation, I focus on speech compression, and more particularly on an implementation using a Linear Predicting Model (LPM). The LPM provides a very efficient way of reconstructing a signal from a very small set of compressed data (up to 95% of the data can be discarded), generating synthesized speech that keeps the original phonemes and the quality of the voice of the speaker, who can easily be recognized. This technique has been used in telephony applications. Read More ›

by Will Wheeler › Reservoir Computing in the Time Domain
Recognize a million words per second. Read More ›

by Yubo 'Paul' Yang › Gibbs Sampling
The basic Gibbs sampler samples a joint probability distribution one variable at a time. Each random variable is sampled from its full conditional probability distribution with all other variables fixed. Independent variables can be sampled simultaneously, making the Gibbs sampler ideal for the restricted Boltzmann machine. Read More ›
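To illustrate the idea sketched above (an example of mine, not from the post), here is a Gibbs sampler for a bivariate normal with correlation rho, where each full conditional is itself a 1D normal:

```python
import math
import random

# Gibbs sampling from a standard bivariate normal with correlation rho:
# the full conditionals x | y and y | x are 1D normals, so we alternate
# exact draws from the two conditionals.
random.seed(42)
rho = 0.8
sigma = math.sqrt(1 - rho**2)          # conditional standard deviation
x, y = 0.0, 0.0
samples = []
for _ in range(20000):
    x = random.gauss(rho * y, sigma)   # draw x | y, with y fixed
    y = random.gauss(rho * x, sigma)   # draw y | x, with x fixed
    samples.append((x, y))

burned = samples[1000:]                # discard burn-in
mean_x = sum(s[0] for s in burned) / len(burned)
corr = sum(s[0] * s[1] for s in burned) / len(burned)
```

The empirical mean tends to 0 and the empirical correlation to rho, confirming the chain samples the intended joint distribution.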

by Eli Chertkov › Belief Propagation
Belief Propagation is a powerful message-passing algorithm at the heart of some of the most effective error-correcting coding schemes currently in use. Read More ›

by Brian Busemeyer › Cellular Automaton
Cellular automata are simplified simulations that attempt to capture interesting large-scale behavior on the basis of a few well-chosen local rules, even when the microscopic details are artificial. Read More ›

by Juha Tiihonen › Automatic focusing of cameras
Autofocus is an essential feature of any optical imaging device, such as a camera or microscope. We go through the most common approaches for finding the optimal focus distance and demonstrate focus classification with several digital focus functions. Read More ›

by Will Wheeler › Feedback Control
Rule the world with linear algebra. Read More ›

by Pratik Lahiri › Hidden Markov Models
When ground water evaporates, dust in the clouds and low temperatures give a chance of snow. Weather and many other complex systems have many internal components (water, dust, temperature) that interact with each other to generate an observable effect (snow). The hidden Markov model is the simplest dynamic Bayesian network; it models the internal state of a complex system as a Markov chain. Read More ›

by Benjamin Villalonga Correa › Pitch Correction
Pitch correction is widely used in the music industry both in real time and in post-production situations. Depending on the original quality of an artist, pitch correcting techniques can range from allowing an already good performance to become excellent (as far as tuning is concerned) to making a terrible one sound robotic and surprisingly reminiscent of the late '90s. From an engineering point of view, the difficulty of these algorithms boils down to manipulating the time scales and frequencies of a signal independently of each other. One algorithm that achieves this is the Phase Vocoder, which is discussed in this presentation. Its range of application goes beyond pitch correction, so more insight into the kinds of problems it tackles will also be given. An example of the commercial software Auto-Tune is also linked. Read More ›

by Dot Silverman › Crocheting Hyperbolic Surfaces
Simple stitch algorithms can help us visualize and physically understand hyperbolic shapes. Read More ›

by Ben Prather › Shor's Algorithm
An informal description of Shor's algorithm for factorization by a quantum computer. Read More ›

by Kiel Williams › Numerical ODE Solutions
Comparison of Common Algorithms for the Solution of ODE Problems. Read More ›

by Matt Zhang › Predict Seizure with EEG
Seizures can be extremely dangerous in the wrong situations and a lot less harmful with the right preparations. With advances in non-invasive brain monitoring technology such as the electroencephalogram (EEG), we may be able to predict a seizure an hour before it happens. Read More ›

by Yubo "Paul" Yang › Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a nature-inspired optimization algorithm. PSO mimics the behavior of a flock of birds looking for a food source. Read More ›
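As a rough sketch of the flocking idea (my own toy example, not the post's code), each particle is pulled toward its personal best and the swarm's best position:

```python
import random

# Minimal particle swarm minimizing f(x, y) = x^2 + y^2.
def f(p):
    return p[0]**2 + p[1]**2

random.seed(0)
n, dims, iters = 20, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]               # each particle's best-seen position
gbest = min(pbest, key=f)[:]              # the swarm's best-seen position

for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]

print(gbest)  # near the true minimum at (0, 0)
```

The random factors r1 and r2 keep the "flock" exploring rather than collapsing straight onto the current best.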

by Dima Kochkov › Fast Fourier Transform
O(N log(N)) computation of the discrete Fourier transform, and fast integer multiplication. Read More ›
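The divide-and-conquer trick behind the O(N log(N)) cost can be sketched in a few lines (an illustrative recursive radix-2 Cooley-Tukey FFT, not code from the post):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # DFT of even-indexed samples
    odd = fft(x[1::2])             # DFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Combine the half-size DFTs with a "twiddle factor".
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

signal = [1, 2, 3, 4, 5, 6, 7, 8]
spectrum = fft(signal)
```

Splitting the length-N transform into two length-N/2 transforms at each level gives the N log(N) recursion, versus N^2 for the naive sum.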

by Eli Chertkov › Error Correcting Codes
Error correcting codes allow full recovery of a signal after noise contamination, provided the noise stays within the code's correcting capacity. Read More ›

by Brian Busemeyer › Evolutionary algorithms
Evolutionary or genetic algorithms are optimization algorithms inspired by the evolution of life. They work by identifying the good parts of different suboptimal solutions and attempting to combine them into new, better solutions. Read More ›

by Ben Prather › The Barnes-Hut Algorithm
The Barnes-Hut algorithm approximately solves the N-body problem using only O(N log(N)) time. Read More ›

by Jyoti Aneja › Grover's Algorithm
Grover's algorithm is a quantum search algorithm that scales as O(sqrt(N)). Read More ›

by Nicholas Laracuente › Theory of Computation

by Will Wheeler › Finding Eigenvalues: Arnoldi and QR
The Arnoldi algorithm reduces a large matrix to a smaller one to simplify finding the largest eigenvalue(s) by the QR method, which is the standard way of finding eigenvalues. Read More ›
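To give a feel for the QR method mentioned above (an illustrative sketch of mine, not the post's code), plain unshifted QR iteration repeatedly factors and recombines the matrix; each step is a similarity transform, so the eigenvalues are preserved while the iterates approach diagonal form:

```python
import numpy as np

# Unshifted QR iteration on a small symmetric matrix: T -> R @ Q
# has the same eigenvalues as T (similarity transform), and for
# distinct eigenvalues the iterates converge toward a diagonal matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(T)
    T = R @ Q

eigs = np.sort(np.diag(T))
print(eigs)  # eigenvalues of A: 3 - sqrt(3), 3, 3 + sqrt(3)
```

Production eigensolvers add shifts and a preliminary reduction (e.g. via Arnoldi or Householder) to make this converge much faster.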

by Matt Zhang › Adaptive Boosting (AdaBoost)
AdaBoost is a systematic way to construct a complicated model (strong learner) by combining many copies of a simple model (weak learner). Each simple model is fit to a reweighted data set, where unexplained data points have higher weights. Read More ›
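The reweighting loop can be sketched with 1D decision stumps as the weak learner (a toy example of mine, not from the post):

```python
import math

# AdaBoost with one-dimensional decision stumps on a toy dataset.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1, 1, -1, -1, 1, 1]            # not separable by any single threshold

def stump(threshold, sign):
    return lambda x: sign if x < threshold else -sign

# Candidate weak learners: every mid-point threshold, both polarities.
candidates = [stump(t + 0.5, s) for t in range(6) for s in (1, -1)]

w = [1 / len(X)] * len(X)           # start from uniform weights
ensemble = []                       # list of (alpha, weak learner)
for _ in range(5):
    def werr(h):                    # weighted misclassification error
        return sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
    h = min(candidates, key=werr)   # best stump on current weights
    err = max(werr(h), 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)
    ensemble.append((alpha, h))
    # Reweight: misclassified ("unexplained") points gain weight.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    total = sum(w)
    w = [wi / total for wi in w]

def predict(x):
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
```

After a few rounds the weighted vote of stumps classifies the whole toy set correctly, even though no single stump can.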

by Alex Munoz › Fractal Compression
Fractal compression uses self-similarity in images and functions to reduce redundant content, storing the redundancies as affine transformations together with a set of coordinates. Read More ›

by Benjamin Villalonga Correa › Introduction to Supervised Machine Learning
With Supervised Machine Learning techniques we can train a model to recognize and classify inputs such as handwritten digits, human faces, objects in a picture, or sports teams with high chances of winning a game. One of the most widely used strategies for doing so is the use of artificial neural networks. Read More ›

by Brian Busemeyer › Compressed sensing
Compressed sensing is a way of extracting a full signal out of a sparse sampling. Its only requirement is that the signal has a sparse representation in some basis, which is actually true for most interesting signals that we encounter. Read More ›

by Dima Kochkov › Boltzmann Machines
Boltzmann Machines are a class of neural networks that can be used for unsupervised learning. Inspired by ideas from physics and neuroscience, these nets admit a simple, local learning rule. The learning is based on minimizing the Kullback–Leibler divergence between the learned probability distribution and that of the dataset. Read More ›

by Yubo "Paul" Yang › Automatic Differentiation
Automatic Differentiation exploits the fact that any algebraic function implemented on a computer can be compiled into a long list of elementary operations and elementary functions. Using this observation, exact differentiation can be carried out efficiently by exploiting the chain rule. Read More ›
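One concrete way to exploit that chain of elementary operations (an illustrative forward-mode sketch of mine, not the post's code) is to carry a (value, derivative) pair through every operation, i.e. dual numbers:

```python
import math

# Forward-mode automatic differentiation with dual numbers:
# propagate (value, derivative) through each elementary operation.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):                       # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):                                          # chain rule for sin
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + 3 * x + sin(x)    # f'(x) = 2x + 3 + cos(x)

x = Dual(2.0, 1.0)    # seed derivative dx/dx = 1
fx = f(x)             # fx.val = f(2), fx.dot = f'(2), both exact
```

Unlike finite differences, the derivative comes out exact to machine precision, because every elementary step applies the chain rule symbolically.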

by Eli Chertkov › Kalman Filter
The Kalman filter is an algorithm that filters the noise out of noisy measurements to extract a signal. An explicit model of the signal must be known. Read More ›
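As a minimal illustration (my own toy example, not from the post), here is the simplest possible case: a 1D Kalman filter estimating a constant signal from noisy measurements, assuming the measurement noise variance is known:

```python
import random

# 1D Kalman filter estimating a constant signal from noisy measurements.
random.seed(1)
true_value = 5.0
R = 0.5 ** 2           # measurement noise variance (the known signal model)
x_hat, P = 0.0, 1e3    # initial estimate and its variance (very uncertain)

for _ in range(200):
    z = true_value + random.gauss(0, 0.5)   # noisy measurement
    K = P / (P + R)                          # Kalman gain
    x_hat = x_hat + K * (z - x_hat)          # correct estimate toward z
    P = (1 - K) * P                          # estimate variance shrinks

print(x_hat)  # close to 5.0
```

The gain K automatically balances trust between the current estimate and each new measurement; in the full filter, a dynamics model also propagates x_hat and P between measurements.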