So it seems a stronger prior on the entries of the B matrix is needed when either the network is larger or there are more groups. Recall that for 3 groups, I used a uniform prior and things looked fine.
Priors of Beta(1, 3) and Beta(10, 1) worked for the diagonal and off-diagonal entries when N=50 and G=4.
Now, it seems we need priors of Beta(1, 10) and Beta(30, 1) for the other three situations.
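As a minimal sketch of the setup above, here is how a block-probability matrix B could be simulated under independent Beta priors on the diagonal and off-diagonal entries (the helper `sample_B` and the use of NumPy are my assumptions, not the notebook's actual code; the N=50, G=4 prior choice is the one noted above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_B(G, a_diag, b_diag, a_off, b_off, rng):
    """Draw a G x G block-probability matrix with independent Beta priors:
    Beta(a_diag, b_diag) on the diagonal, Beta(a_off, b_off) off-diagonal.
    (Hypothetical helper for illustration.)"""
    B = rng.beta(a_off, b_off, size=(G, G))        # off-diagonal entries
    np.fill_diagonal(B, rng.beta(a_diag, b_diag, size=G))  # overwrite diagonal
    return B

# e.g. the G = 4 setting mentioned above:
B = sample_B(4, 1, 3, 10, 1, rng)
```

Tightening the Beta parameters (e.g. Beta(1, 10) and Beta(30, 1)) concentrates the prior mass, which is the "stronger prior" idea for larger N or G.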
But this is enough to start thinking about true mixed-membership models (i.e., allowing theta to vary a bit more) and to get back to coding my HMMSBMs!
Update: HMMSBM without hierarchical structure is working!
******************
Regarding HLSMs, my adjust-my-tune function is problematic, but it is ridiculous to have to adjust every single tuning parameter by hand. We are still fine-tuning it so that it will have at least some utility. More time now = less time later (hopefully).
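The actual adjust-my-tune function isn't shown here, but one common automatic scheme is to rescale each Metropolis proposal standard deviation toward a target acceptance rate (the function name, target, and factor below are assumptions for illustration):

```python
def adjust_tune(scale, accept_rate, target=0.25, factor=1.1):
    """Multiplicatively nudge a Metropolis proposal scale toward a target
    acceptance rate. Hypothetical sketch, not the notebook's actual code."""
    if accept_rate > target:
        return scale * factor   # accepting too often: take bolder steps
    if accept_rate < target:
        return scale / factor   # rejecting too often: take smaller steps
    return scale

# after each adaptation window, e.g.:
new_scale = adjust_tune(1.0, accept_rate=0.60)
```

Running this once per adaptation window (during burn-in only, so the chain's stationary distribution is preserved) removes most of the hand-tuning.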
Thesis meeting at 1:30: talk about the identifiability issue (priors vs. truncated distributions), and about hopefully getting the HLSM out by next week for the JEBS paper.
A couple of things to think about/do:
(1) Do I really need a strong prior, or can I just use a flat prior on the B’s? Check convergence using the posterior (i.e., evaluate the likelihood and prior at each step and compare to their values at the true parameters).
(2) Tuning parameters for the HLSM: it might not be as simple as changing everything at once. Try isolating what I change: change one parameter at a time and see how that affects everything else.
(3) Continue coding the HMMSBM and see if I can get some data.
(4) Once I have determined that the HLSM fits my simulated data, I will move on to the Pitts & Spillane data.
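The convergence check in item (1) can be sketched as follows: compute the unnormalized log posterior of B at each sweep and compare the traceplot to the same quantity evaluated at the true B used to simulate the data. This assumes known memberships z, a Bernoulli likelihood on directed edges, and the Beta priors from above; `log_posterior` is a hypothetical helper, not the notebook's code:

```python
import math

def log_beta_pdf(x, a, b):
    """Log density of Beta(a, b) at x, via log-gamma for stability."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))

def log_posterior(Y, z, B, a_diag=1, b_diag=3, a_off=10, b_off=1):
    """Unnormalized log posterior of the block matrix B for an SBM with
    known memberships z (assumed setup, for the convergence check only)."""
    lp = 0.0
    n = len(Y)
    # likelihood: each directed edge y_ij ~ Bernoulli(B[z_i, z_j])
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = B[z[i]][z[j]]
            lp += math.log(p) if Y[i][j] else math.log(1 - p)
    # independent Beta priors on the entries of B
    G = len(B)
    for a in range(G):
        for b in range(G):
            lp += log_beta_pdf(B[a][b], a_diag, b_diag) if a == b \
                  else log_beta_pdf(B[a][b], a_off, b_off)
    return lp
```

Tracking `log_posterior` per sweep and checking that the chain settles near its value at the true parameters is a quick sanity check on both convergence and the prior's strength.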
Deadlines: HMMSBM paper in December 2011, NSF grant deadline is Jan 2012.