Distributed Vision Processing - Part 1
It’s been a while since I voiced any opinions or tips on the interwebs. Let’s just say I’ve been on a magical journey of figuring out what the hell I’m going to do in the coming years.
After all the smoke cleared, I saw the path. The path back to the past, that is: computer vision processing, my concentration in college.
What vision projects?
I’m currently working on two vision processing related projects: one has to do with TV, the other with traffic. Both projects share a similar platform for processing live image streams at 24 fps. It was fun to see my prototype work by just spawning some processes on my dev laptop and watching everything automate itself; this is how programmers entertain themselves. Unfortunately, that euphoric feeling was cut short when I started to think about how this tiny thing would scale. I was off to my six-foot whiteboard to discuss scalability with the team. My team consists of Me the programmer, Me the devops, and Me the R & D guy; surprisingly, we never agree on anything.

Ok, scalability. It’s all fun and games until you exhaust all your server resources.
These vision projects require a lot of bandwidth and storage. Our R & D guy decided that it would be too expensive to run these projects in the cloud, since bandwidth costs in the country where they would be deployed are ridiculous. For example, each vision sensor node would generate 25 fps at roughly 14 KB per frame, which works out to ~1.2 GB of data per hour, and we don’t have enough money for this project to make it rain yet. We would need to build a server farm that can process 24 fps × any number of TV or traffic video capture nodes. Each node in the farm or sub-farm would need to be aware of the other nodes. The idea was to create a system that allows a CV programmer to create services that operate on any captured frame, e.g. a service that counts the number of heads in a frame and publishes/saves the result to the cluster/farm for any other service that may use this info. Each service can then be scaled automatically depending on load.
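To make that back-of-envelope math concrete, here’s the kind of quick sanity check I mean. The frame size and node count below are just the example figures from above, not real deployment numbers:

```go
package main

import "fmt"

func main() {
	const (
		fps          = 25   // frames per second from one sensor node
		frameKB      = 14   // approximate size of one frame, in kilobytes
		secondsPerHr = 3600
	)

	// Data generated by a single node in one hour, in gigabytes.
	perNodeGBPerHour := float64(fps*frameKB*secondsPerHr) / (1024 * 1024)
	fmt.Printf("one node: %.2f GB/hour\n", perNodeGBPerHour) // ~1.2 GB/hour

	// Scale it out: e.g. 20 capture nodes running around the clock.
	nodes := 20
	fmt.Printf("%d nodes: %.1f GB/day\n", nodes, float64(nodes)*perNodeGBPerHour*24)
}
```

Multiply that by weeks of retained footage and the cloud bandwidth bill stops being funny very quickly.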
These reasons led to the creation of a private vision processing cluster based on open source technologies like Riak, Consul, Gnatsd, Openframeworks and WeedFs.
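To give a flavour of what a “service” looks like in this setup, here’s a rough sketch of a worker that subscribes to captured frames and publishes a head count back to the cluster over gnatsd. The subject names (frames.captured, results.headcount), the countHeads stub, and the client import path are made up for illustration; the real detection happens in the Openframeworks/OpenCV code, and the actual wiring is what the upcoming posts will cover.

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

// countHeads is a stand-in for the real detector; in the actual
// pipeline this work happens in the Openframeworks/OpenCV layer.
func countHeads(frame []byte) int {
	return 0 // placeholder
}

func main() {
	// gnatsd listens on nats://127.0.0.1:4222 by default.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Capture nodes publish raw frames on this subject (hypothetical name).
	_, err = nc.Subscribe("frames.captured", func(m *nats.Msg) {
		heads := countHeads(m.Data)
		// Publish the result so any other service on the cluster can use it.
		nc.Publish("results.headcount", []byte(fmt.Sprintf("%d", heads)))
	})
	if err != nil {
		log.Fatal(err)
	}

	select {} // keep the service running
}
```

Because every worker is just a subscriber on a subject, scaling a service under load is mostly a matter of spawning more copies of it and letting the messaging layer spread the frames around.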
I intend to explain how I used each of these to build my distributed CV server in my upcoming blog posts.
Thanks for reading.