Hello there

My name’s Norm Ovenseri and I studied Statistics with a good mix of Computer Science, because I believe technology and statistics envelop the world whether you like it or not. :}

Most Recent Post

Interesting Things in Software (9/10/2016)

I decided to start an “Interesting Things in Software” series to log things I’ve found interesting to read about.

Sonification of Algorithms

The idea is essentially to assign a sound effect to your algorithms. For a sorting algorithm, you could play a sound on every data access or write, with the tone/pitch modified depending on the data values (see the YouTube link below).
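To make the idea concrete, here is a minimal sketch of sonifying a sort: a bubble sort instrumented so that every data read emits a "tone" whose pitch is proportional to the value. The pitch range (200–2000 Hz) and the linear value-to-pitch mapping are arbitrary choices for illustration, not anything from the videos linked below.

```python
def value_to_pitch(value, lo, hi, f_min=200.0, f_max=2000.0):
    """Map a data value linearly onto a frequency range in Hz."""
    if hi == lo:
        return f_min
    return f_min + (value - lo) / (hi - lo) * (f_max - f_min)

def sonified_bubble_sort(data):
    """Bubble sort that records a pitch for every element it reads."""
    data = list(data)
    lo, hi = min(data), max(data)
    tones = []  # the "sound signature" of this run
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):
            tones.append(value_to_pitch(data[j], lo, hi))      # read
            tones.append(value_to_pitch(data[j + 1], lo, hi))  # read
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]    # write (swap)
    return data, tones

sorted_data, tones = sonified_bubble_sort([5, 1, 4, 2, 3])
```

Feeding `tones` to an audio library would give you the swooping pitch patterns you hear in the videos; a different algorithm (or a buggy one) would produce a different sequence of pitches, which is the whole "signature" idea.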

This is an interesting idea precisely because it is so odd to think about. The first question is: what is this, and why would I want to do it? Patterns. Every algorithm has a sound signature. Read the Sonification of Monitoring link below for the idea. My takeaway from the post is that you can tell what a system is doing by its signature, and if the system makes sounds other than what is expected, there may be a problem. Applying this idea at scale (w.r.t. complexity and size) makes more sense than trying to apply it to smaller algorithms.

Look at Example S18.3: Sonnet in the Sonification of Data link below: off-by-one error detection by sound.

Imagine being able to detect a bug in your algorithm because the sounds it was making were not harmonized. A very odd way to think about finding issues, in my opinion.


Sorting Sounds: https://www.youtube.com/watch?v=kPRA0W1kECg

Sonification of Monitoring: http://muratbuffalo.blogspot.com/2016/09/sonification-for-monitoring-and.html

Sonification of Data: http://sonification.de/handbook/index.php/chapters/chapter18/

The Sound of Voldemort: http://charap.co/sound-of-voldemort/

(Extra) Sorting Algorithm Sounds Playlist: https://www.youtube.com/playlist?list=PLZh3kxyHrVp_AcOanN_jpuQbcMVdXbqei&src_vid=kPRA0W1kECg

Dynamic Time Warping (DTW)

I’ve been reading about how to assign a similarity score between two time series that is transformation invariant, and it has led me to DTW as well as other challengers to DTW. The idea of DTW is to compare every point in one time series (TS1) against every point in a reference time series (TS2) with a distance function (Euclidean distance is one) and then find the alignment path that optimizes a cost function (minimum distance/maximum similarity).
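The classic formulation can be sketched as a dynamic program: fill an n×m cost matrix where each cell holds the pointwise distance for that pair plus the cheapest neighboring path, then read the optimal alignment cost from the far corner. This sketch uses absolute difference as the pointwise distance for simplicity.

```python
def dtw_distance(ts1, ts2):
    """Classic O(n*m) dynamic-time-warping distance between two series."""
    n, m = len(ts1), len(ts2)
    inf = float("inf")
    # cost[i][j] = cheapest alignment of ts1[:i] with ts2[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ts1[i - 1] - ts2[j - 1])      # pointwise distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch ts2
                                 cost[i][j - 1],      # stretch ts1
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Unlike plain Euclidean distance, this scores `[1, 2, 3]` and `[1, 2, 2, 3]` as identical (cost 0.0), since the repeated 2 can be warped onto the single 2.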

I like this idea, but I think you can see the inherent inefficiency… do we really have to compare every value in TS1 to every value in TS2? Couldn’t we select a window of values to compare instead? Sure, this has been explored, and that is the whole idea of Subsequence DTW. Other researchers took it further and explored and implemented multilevel (resolution-based) DTW to bring the run time down to linear time. They’ve even released the implementation so anyone can reproduce the results.


DTW: https://en.wikipedia.org/wiki/Dynamic_time_warping

FastDTW: http://cs.fit.edu/~pkc/papers/tdm04.pdf