Saturday 26 May 2012

Simple application of pressure imbalance

Almost everyone has seen or been to a magic show at least once in their lifetime.
I have also been to a few when I was a kid.
The trick went like this: the magician filled a bottle with water, the cap having small holes
for the water to drain out. He then called me onto the stage and asked me to look at the bottle and tell the audience whether the holes were genuine or sealed with some transparent glue.
After confirmation he gave me the bottle and asked me to turn it upside down, and, well, obviously
the water fell out thanks to gravity.
He then took the bottle in his hands, whispered a few chants, gave the bottle a few shakes and voila,
the drizzling water stopped as though gravity wasn't working.
The trick ended, and I kept thinking about it for quite some time, until it finally
dawned on me:
it wasn't the absence of gravity but simply the pressure of the air and water that was stopping the water from falling out.
The mumbo-jumbo was just for the amazement of the audience; the real trick was in the shakes and wobbles, during which he squeezed the plastic bottle.
And as we very well know, squeezing changes the pressure, and that was sufficient to create an imbalance between the air inside and outside the bottle,
thus stopping the flow of water from the bottle.
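
To put rough numbers on that imbalance, here is a small back-of-the-envelope sketch; the water-column height and the pressures are assumed values for illustration, not anything measured at the show:

```python
# A back-of-the-envelope check of the pressure-imbalance explanation.
# All numbers below are illustrative assumptions, not measurements from the show.

RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration
P_ATM = 101325.0     # Pa, standard atmospheric pressure

def water_stays_in(p_air_inside, column_height):
    """Water stops dripping (ignoring surface tension) when the pressure just
    inside the holes -- trapped air plus the water column -- does not exceed
    the atmospheric pressure pushing back from outside."""
    p_inside_at_holes = p_air_inside + RHO_WATER * G * column_height
    return p_inside_at_holes <= P_ATM

h = 0.15  # m, assumed height of the water column above the cap
deficit_needed = RHO_WATER * G * h  # how far below atmospheric the trapped air must drop
print(f"Trapped air must sit at least {deficit_needed:.0f} Pa below the outside pressure")
print(water_stays_in(P_ATM - 2000, h))  # True: a small squeeze-and-release is enough
print(water_stays_in(P_ATM, h))         # False: equal pressures, the water keeps dripping
```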
Knowing is way more fun than the trick itself...

Sunday 13 May 2012

Max-Lloyd algorithm

Here I am talking about the algorithm used for finding both the regions into which we quantize a given set of values and the quantization values themselves.
As per the Max-Lloyd algorithm:
1) We choose some initial set of points as the quantization points.
2) We then find the region boundaries as the perpendicular bisectors between neighbouring quantization points, so as to minimize the error with respect to those points.
3) We then find the expected value of the points lying in each region of interest, call this the new quantization value, and iterate the algorithm (a small sketch follows below).
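
As a concrete illustration of those three steps, here is a minimal one-dimensional sketch; the Gaussian sample data, the number of levels and the iteration count are just assumptions for demonstration:

```python
import numpy as np

def lloyd_max_1d(samples, n_levels=4, n_iters=50):
    """Minimal 1-D Lloyd-Max sketch: boundaries are midpoints (the perpendicular
    bisectors in one dimension) and each quantization point is updated to the
    mean (expected value) of the samples falling in its region."""
    samples = np.asarray(samples, dtype=float)
    # Step 1: pick an initial set of quantization points.
    q = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(n_iters):
        # Step 2: region boundaries are midpoints between neighbouring points.
        b = (q[:-1] + q[1:]) / 2.0
        # Step 3: replace each point by the mean of the samples in its region.
        regions = np.digitize(samples, b)
        for k in range(n_levels):
            in_region = samples[regions == k]
            if in_region.size:
                q[k] = in_region.mean()
    # Final boundaries consistent with the last point update.
    b = (q[:-1] + q[1:]) / 2.0
    return q, b

# Illustrative run on assumed Gaussian data.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 10_000)
points, boundaries = lloyd_max_1d(data, n_levels=4)
print("quantization points:", np.round(points, 3))
print("boundaries:        ", np.round(boundaries, 3))
```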
What this basically does is start from some chosen points and move to a better and better approximation of them. However, the algorithm by itself has no clue about the shape of the regions it produces once we consider a two- or higher-dimensional vector space for the chosen points, i.e., when we group pairs of values together.
Hence, if a better MSE (mean square error) can be achieved by changing some parameter of the shape of the particular region in question, then maybe the MSE can be reduced even further.
For example, if we consider a two-dimensional vector space, the algorithm would most likely settle on a square grid as its best MSE. A circle would actually have the least MSE per region, but circles cause problems of coverage, so hexagons would be the next best option.
So, taking all this into account, why not modify one of the steps of the Lloyd-Max algorithm?
It could be as follows:
Steps 1), 2) and 3) remain the same during the first iteration.
During the next iteration, add one more step after step 2).
Step 3) would now become:
Find the MSE on the bisectors at fixed distances; the fixed distance would depend on the spacing of the two-dimensional vectors. Once we obtain this, we can see how the MSE changes along the boundary and modify the shape of the region accordingly.

But this on its own would lead to circles, and some areas would remain uncovered.
To get around this, I have asked for the measurements to be taken at a fixed distance apart, not extremely close together.

The previous step 3) now becomes step 4), and the reiteration can take place as before (a rough sketch of the probing step follows below).
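
As one possible literal reading of that extra probing step (not a worked-out version of the idea), here is a small sketch that walks along the perpendicular bisector of two quantization points at a fixed spacing and records the squared error a sample sitting at each probe would incur; the two points, the spacing and the number of probes are arbitrary assumptions:

```python
import numpy as np

def probe_bisector_mse(p1, p2, spacing=0.5, n_probes=5):
    """Sketch of the proposed extra step: step along the perpendicular bisector
    of two quantization points at a fixed spacing and record the squared error
    a sample located at each probe would incur.  On the bisector both points
    are equidistant, so the error is the same to either; watching it grow away
    from the midpoint is what would let us judge where the straight boundary
    stops being worth keeping."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = (p1 + p2) / 2.0
    d = p2 - p1
    # Unit vector along the bisector (perpendicular to the segment p1-p2).
    t = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    probes = [mid + k * spacing * t for k in range(-n_probes, n_probes + 1)]
    return [(probe, float(np.sum((probe - p1) ** 2))) for probe in probes]

# Illustrative run with two assumed codepoints.
for probe, sq_err in probe_bisector_mse([0.0, 0.0], [2.0, 0.0], spacing=0.5):
    print(np.round(probe, 2), round(sq_err, 3))
```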

If increasing the number of quantization levels is permitted, then
we can assign a few extra levels to the spaces that will be left over, considering that circles would leave some areas uncovered.
However, the probability that those spaces are actually used by the points will be very small, due to the very nature of how the algorithm works.

Thursday 10 May 2012

LZ-77

No, I am not talking about a car model or something.
It's the dictionary-based coding scheme in digital communications designed by Lempel and Ziv in 1977.
They had an amazingly cool idea. The whole problem before this was that we would assume a source with a certain distribution, say a Markov process, but the source would then produce nothing even close to that particular process, or would produce states that do not agree with the general behaviour of a Markov process.
Then the whole model goes for a toss, and sometimes, instead of compressing, we end up with something even sadder (longer) than the original source data stream.
So these guys said: oh well, let's not assume the source distribution; instead, let's build up a statistical model as and when the source generates its output.
Hence, once a long enough sequence of source symbols has occurred and the source has stabilised to some distribution without major changes, we have the statistics collected so far, and we can model the source on those statistics themselves, generating one hell of a code.
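
To give a flavour of how the already-seen text acts as the dictionary, here is a toy sliding-window LZ77-style sketch; the window size, the maximum match length and the (offset, length, next byte) output format are simplifying assumptions, and real implementations differ in many details:

```python
def lz77_compress(data: bytes, window: int = 255, max_len: int = 15):
    """Toy LZ77: emit (offset, length, next_byte) triples, where offset/length
    point back into the already-seen text (the implicit dictionary) and
    next_byte is the first symbol that breaks the match."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        # Search the window of already-emitted text for the longest match.
        for j in range(start, i):
            length = 0
            while (length < max_len and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len]
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

# Repeated text gets replaced by back-references into what was already sent.
print(lz77_compress(b"abracadabra abracadabra"))
```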
The only problem is the fact that we are not assuming anything in the beginning, and that is always the source of one problem or another. More specifically here, we have no clue about the distribution until a lot of statistics have been collected, so we transmit everything until they are available, which is rather clumsy
when the source changes its statistics or when we are just starting out.
So, how can we overcome this problem?
Solutions may already exist which I am unaware of at this point.
But I feel: why not just assume some model and code that way until we have generated sufficient data,
and then let the LZ algorithm kick in?
The model we choose might not be an exact representation, but even if it does a decent job of coding,
a lot is achieved in this bandwidth-hungry world.
I haven't quite thought about which models we could assume in the beginning.
More on that next time, when I have a better idea of what exactly we are dealing with.