During this week, with a good amount of effort, I was able to get ahead of my proposed timeline by writing the sample and generate sample methods. Most of my time this week was spent writing the sample method and reading the Handbook of Markov Chain Monte Carlo [1]. My experience reading the book proved rather depressing, as I wasn't able to even partially grasp the Monte Carlo content in the introductory chapter, but seeing the HamiltonianMCda class return samples did boost my morale.
Earlier I wasn't able to test my implementation of the find-reasonable-epsilon method. There was a bug in it, and it took me a while to find (I had treated a single-valued 2-D numpy.matrix like a floating-point value, hence the bug). I also found that numpy.array is more flexible than numpy.matrix, and even scipy recommends numpy.array over numpy.matrix [2] (this is conditional, check the post for details). During this week's meeting we (the community) also decided to use numpy.array instead of numpy.matrix, in accordance with that post [2].
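To illustrate the kind of pitfall behind the bug, here is a small toy sketch (my own illustration, not the actual pgmpy code): a 1x1 numpy.matrix looks like a scalar but is still a 2-D object, so arithmetic with it behaves differently from a plain float, while numpy.array behaves more predictably.

```python
import numpy as np

# A 1x1 numpy.matrix is still a 2-D object, not a scalar,
# so code that treats it like a float silently keeps producing 2-D results.
m = np.matrix([[0.5]])
print(m.shape)        # (1, 1)
print((m * 3).shape)  # still (1, 1)

# Extracting the scalar explicitly avoids the problem.
eps = m[0, 0]
print(type(eps))      # plain numpy float

# numpy.array keeps the dimensionality you expect and * is element-wise,
# which is part of why scipy recommends arrays over matrices.
a = np.array([0.5])
print(a.shape)        # (1,)
print((a * a).shape)  # (1,) -- element-wise, unlike matrix multiplication
```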
As the sampling method was functional for the first time, I was able to actually see the performance of the HMC sampling algorithm. I knew theoretically that the step size and number of steps affect the performance of the algorithm, and on an actual run the difference was clearly visible. Sometimes, for un-tuned values of the step size (epsilon) and number of steps (calculated using Lambda, see algorithm 5 [3]), the algorithm took ages to return a mere 5-6 samples. During adaptation of epsilon in the dual averaging algorithm, the epsilon value was sometimes decreased by a huge exponent, which in turn increased the number of steps by the same exponent, causing the algorithm to run for a great deal of time. Not only was the difference visible due to these parameters, the sample quality was also affected by the algorithm we chose for discretization. Modified Euler's performance was really awful compared to leapfrog; the results generated by the leapfrog method were really good.
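For context, here is a minimal sketch of the two discretization schemes (my own illustration, not pgmpy's actual implementation). In algorithm 5 the number of steps is roughly Lambda divided by epsilon, which is why a sharp drop in epsilon during dual averaging blows up the step count.

```python
import numpy as np

def leapfrog(grad_log_pdf, position, momentum, epsilon):
    """One leapfrog step for HMC (second-order accurate).

    grad_log_pdf is assumed to return the gradient of the log target
    density at a point.
    """
    # Half-step update of momentum
    momentum = momentum + 0.5 * epsilon * grad_log_pdf(position)
    # Full-step update of position
    position = position + epsilon * momentum
    # Second half-step update of momentum
    momentum = momentum + 0.5 * epsilon * grad_log_pdf(position)
    return position, momentum

def modified_euler(grad_log_pdf, position, momentum, epsilon):
    """One modified Euler step, for comparison: only first-order accurate,
    which is consistent with the poorer sample quality mentioned above."""
    momentum = momentum + epsilon * grad_log_pdf(position)
    position = position + epsilon * momentum
    return position, momentum
```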
I also wrote tests for the HMC algorithm. I was thinking of using mock, as it was already being used in tests for other sampling methods in pgmpy, but my mentor recommended generating samples and applying inference on them instead. This also took a great deal of time, as I had to choose a model and hand-tune the parameters so that the tests didn't become too slow.
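The overall structure of such a test might look roughly like the sketch below (a hypothetical helper, not the actual pgmpy test): draw samples from a target with known parameters and check that the empirical moments come out close, with sizes kept small enough that the test stays fast.

```python
import numpy as np

def check_hmc_recovers_gaussian_moments(sampler):
    """Illustrative test structure: `sampler` is a hypothetical callable that
    runs HMC on a 2-D Gaussian with the given mean and covariance."""
    mean = np.array([1.0, -1.0])
    cov = np.array([[1.0, 0.7], [0.7, 2.0]])

    # Hand-tuned sample count: large enough for stable estimates,
    # small enough to keep the test fast.
    samples = sampler(mean, cov, num_samples=2000)

    np.testing.assert_allclose(samples.mean(axis=0), mean, atol=0.2)
    np.testing.assert_allclose(np.cov(samples.T), cov, atol=0.3)
```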
Also this week, my mentor and I weren't able to settle on the parameterization of the model discussed in the previous post, and we don't have any leads on that matter as of now. Next week I'll clean up the code and try to think harder about it.
References and Links
[1] Handbook of Markov Chain Monte Carlo