CSE 672 Bayesian Vision
SUNY at Buffalo
Syllabus for Fall 2010


Instructor: Jason Corso (UBIT: jcorso)
Course Webpage: http://www.cse.buffalo.edu/~jcorso/t/2010F_672
Syllabus: http://www.cse.buffalo.edu/~jcorso/t/2010F_672/files/syllabus.pdf
Meeting Times: MW 1-2:30 (changed from MWF 1-2; see News)
Location: 242 Bell (changed from 102 Clemens; see News)
Office Hours: W 2:30-3:30 and F 2-3 (changed from M 4-5)

News

  • Assignment 2 is assigned and due 27 Oct in class. [Description]
  • Assignment 1 is assigned and due 29 Sept in class. [Description] [Data]
  • Example projects from the Fall 08 offering of this course are available in my CSE network folder. Also, my earlier stated count of Fall 08 projects that eventually became published work was off by two: it was 4 of 8, not 2 of 8.
  • We are able to switch the course to meet two days a week. Starting 9/6, we will meet MW 1-2:30 in Bell 242. We will stay in Clemens 102 for Wednesday 9/1, 1-2, in case this change does not reach everyone in time. On Friday 9/3, we will meet in Bell 242 from 1-2.
  • Please fill out the Doodle scheduler so we can see whether to rearrange the class times to meet twice a week rather than three times; the course material is better suited to fewer, longer lectures.
  • 8/30 -- First day of class.

Calendar

The calendar is given in weeks and will be populated as the semester proceeds based on the course outline and our progress. There are no slides for this course (lectures are given on the board) and you should cross-reference reading materials with the outline below and the bibliography I handed out with the syllabus.

August 30
Introduction. Statistics of Natural Images.
Sept. 6 (no class Monday: Labor Day)
Statistics of Natural Images.
Sept. 13
Descriptive Models 1: MRFs/Gibbs Fields.
Sept. 20
(No Class, out of town)
Sept. 27
Descriptive Models 2: Early MRFs and Applications of MRFs.
Oct. 4
Inference 1: Deterministic Methods
Oct. 11
Inference 2: Stochastic Methods
Oct. 18
Parameter Estimation in MRFs
Oct. 25
 
Nov. 1
 
Nov. 8
(Nov. 10 CVPR Deadline)
Nov. 15
 
Nov. 22 Thanksgiving Week.
(No class, out of town; use the week for project development)
Nov. 29
 
Dec. 6
 
Friday Dec. 10 is the last day of classes.

Main Course Material

Course Overview: The course takes an in-depth look at various Bayesian methods in computer and medical vision. Through the language of Bayesian inference, the course will present a coherent view of the approaches to various key problems such as detecting objects in images, segmenting object boundaries, and recognizing objects. The course is roughly partitioned into two parts: modeling and inference. In the first half, it will cover both classical models such as weak membrane models and Markov random fields as well as more recent models such as conditional random fields, latent Dirichlet allocation, and topic models. In the second half, it will focus on inference algorithms. Methods include PDE boundary evolution algorithms such as region competition, discrete optimization methods such as graph-cuts and graph-shifts, and stochastic optimization methods such as data-driven Markov chain Monte Carlo. An emphasis will be placed on both the theoretical aspects of this field as well as the practical application of the models and inference algorithms.
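To fix notation for what follows (my addition; this is the standard setup, cf. Winkler (2006, Chapter 1)): the course poses vision problems as posterior inference over a hidden scene description $x$ (labels, boundaries, parses) given an observed image $y$,

$p(x|y) = p(y|x)\,p(x)/p(y) \propto p(y|x)\,p(x)$,

with the MAP estimate $x^* = \arg\max_x p(y|x)\,p(x)$. The modeling half of the course is about choosing the prior $p(x)$ and the likelihood $p(y|x)$; the inference half is about computing $x^*$ (or posterior expectations) when direct maximization is intractable.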

Course Project: Each student will be required to complete a course project that is either a direct implementation of a method discussed during the semester or new research in Bayesian vision. A paper describing the project is required at the end of the semester (6-8 pages, two-column IEEE format), and we will have an open-house poster session to present the projects. Working project demos are suggested but not required for the poster session. This is a "projects" course. Your project can satisfy a Master's requirement. In most cases, it will involve at least some new/independent research. The last time this course was offered, 2 of 8 projects were submitted to main conferences (CVPR and ICPR), and both were accepted.

Prerequisites: It is assumed that students have taken introductory courses in pattern recognition (CSE 555) and computer vision (CSE 573). Machine learning (CSE 574) is suggested but not required. A strong understanding of, and ability to work with, probability, statistics, calculus, and optimization is expected.

Permission of the instructor is required if these pre-requisites have not been met.

Course Goals: After taking the course, the student will have a clear understanding of the state-of-the-art models and inference algorithms for solving vision problems within a Bayesian methodology. Through completing the course project, the student will also have a deep understanding of the low-level details of a particular model/algorithm and application. The student will have completed some independent research in Bayesian Vision by the end of the course.

Textbooks: There is unfortunately no complete textbook for this course. The required material will either be distributed by the instructor or found on reserve at the UB Library. Recommended textbooks are

  1. Li, S. Markov Random Field Modeling in Image Analysis. Springer-Verlag. 3rd Edition. 2009.

  2. Winkler, G. Image Analysis, Random Fields and Markov Chain Monte Carlo Methods: A Mathematical Introduction. Springer. 2006.

  3. Chalmond, B. Modeling and Inverse Problems in Image Analysis. Springer. 2003.

  4. Bishop, C. M. Pattern Recognition and Machine Learning. Springer. 2007.

Course Outline

The course is roughly divided into two parts. In the first part, we discuss various modeling and associated learning algorithms. In the second part, we discuss the computing and inference algorithms which use the previously discussed models to solve complex inference problems in vision. The topic outline follows; citations are given and an underlined citation indicates a primary (must-read) one. All or most papers are available in PDF at the course directory (location above).

  1. Introduction.

    1. Discussion of Bayesian inference in the context of vision problems. (Winkler, 2006, Chapter 1) (Chalmond, 2003, Chapter 1) (Hanson, 1993)
      Probabilistic Inference Primer: (Griffiths and Yuille, 2006)

    2. Presentation of relevant empirical findings concerning the statistics of images that motivate the Bayesian approach; a quick empirical check of the $1/f$ spectrum regularity is sketched at the end of this Introduction item. (Field, 1994) (Field, 1987) (Julesz, 1981) (Kersten, 1987) (Ruderman, 1994) (Simoncelli and Olshausen, 2001) (Torralba and Oliva, 2003) (Wu et al., 2007)

    3. Model classes: discriminative, generative and descriptive. (Zhu, 2003)
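    A quick empirical check of item 2 above (a sketch I am adding for illustration; Field (1987) reports that natural-image amplitude spectra fall roughly as $1/f$, i.e., power $\sim 1/f^2$): compute the radially averaged power spectrum of a natural image and inspect its slope on log-log axes. Python with numpy only; radial_power_spectrum is my own (hypothetical) helper name.

      import numpy as np

      def radial_power_spectrum(img):
          # Radially averaged power spectrum of a 2D grayscale array.
          f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
          power = np.abs(f) ** 2
          h, w = img.shape
          yy, xx = np.indices((h, w))
          r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
          # Mean power within each integer-radius frequency bin.
          sums = np.bincount(r.ravel(), weights=power.ravel())
          counts = np.bincount(r.ravel())
          return sums / np.maximum(counts, 1)

      # For a natural image, log(spectrum[1:]) plotted against
      # log(range(1, len(spectrum))) is close to a line of slope near -2.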

  2. Modeling and Learning.

    1. Descriptive models on regular lattices.

      1. Markov random field models and Gibbs fields; the Gibbs form is written out in the note at the end of this sub-item. (Li, 2001, §1.2) (Winkler, 2006, §2,3) (Dubes and Jain, 1989)
      2. The Hammersley-Clifford theorem.
      3. Bayes MRF Estimators (Winkler, 2006, §1.4) (Li, 2001, §1.5) (Geman and Geman, 1984)
      4. Examples:
        1. Auto-Models (Besag, 1974) (Li, 2001, §1.3.1, 2.3, 2.4) (Winkler, 2006, §15)
        2. Weak membrane models, Mumford-Shah, TV, etc.
      5. Applications:
        1. Image Restoration and Denoising (Li, 2001, §2.2)
        2. Edge Detection and Line Processes (Li, 2001, §2.3) (Geman and Geman, 1984)
        3. Texture (Li, 2001, §2.4) (Winkler, 2006, §15,16)
      6. MRF Parameter Estimation (Li, 2001, §6) (Winkler, 2006, §5,6)

        1. Maximum-Likelihood
        2. Pseudo-Likelihood
        3. Gibbs Sampler (and brief introduction to MCMC)
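      For orientation (my condensation of standard material in Li (2001) and Winkler (2006)): a Gibbs field on a lattice with clique set $\mathcal{C}$ has density $p(x) = \frac{1}{Z} \exp(-\sum_{c \in \mathcal{C}} V_c(x_c))$, where the partition function $Z$ sums the exponential over all configurations. The Hammersley-Clifford theorem states that this family coincides with the Markov random fields having strictly positive densities. The Ising auto-model is the textbook instance: $x_i \in \{-1,+1\}$ with pairwise potentials $V_{\{i,j\}}(x_i, x_j) = -\beta x_i x_j$ on neighboring sites, which favors smooth configurations when $\beta > 0$.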

    2. Descriptive Models on Regular Lattices: Advanced Topics

      1. Discontinuities and Smoothness Priors (Li, 2001, §4)

      2. FRAME and Minimax entropy learning of potential functionals. (Zhu et al., 1998) (Zhu et al., 1997) (Coughlan and Yuille, 2003)

      3. Hidden Markov random fields. (Zhang et al., 2001)

      4. Conditional random fields. (Lafferty et al., 2001) (Kumar and Hebert, 2003) (Wallach, 2004) (Ladicky et al., 2009)

      5. MRF as a foundation for multiresolution computing. (Gidas, 1989)

      6. Higher Order Extensions (Kohli et al., 2007) (Kohli et al., 2009)

    3. Descriptive and Generative Models on Irregular Graphs and Hierarchies.

      1. Markov random field hierarchies. (Derin and Elliott, 1987) (Krishnamachari and Chellappa, 1995) (Chardin and Perez, 1999)

      2. Over-Complete Bases and Sparse Coding (Zhu, 2003, §6) (Olshausen and Field, 1997) (Coifman and Wickerhauser, 1992)

      3. Textons (Julesz, 1981) (Zhu et al., 2005) (Malik et al., 1999)

      4. And-Or graphs and context-sensitive grammars. (Zhu and Mumford, 2007) (Han and Zhu, 2005)

      5. Dirichlet Processes (DP) and Bayesian Clustering (Ferguson, 1973)

      6. Latent Dirichlet Allocation, hierarchical DP and author-topic models; the LDA generative process is summarized after this sub-list. (Blei et al., 2003) (Teh et al., 2005) (Steyvers et al., 2004)

      7. Correspondence LDA (Blei and Jordan, 2003)
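      Since the topic models above are easiest to grasp from their generative stories (my one-line summary of Blei et al. (2003)): LDA generates each document $d$ by drawing topic proportions $\theta_d \sim \mathrm{Dirichlet}(\alpha)$ and then, for each word position $n$, a topic $z_{dn} \sim \mathrm{Multinomial}(\theta_d)$ followed by a word $w_{dn} \sim \mathrm{Multinomial}(\beta_{z_{dn}})$; inference inverts this process to recover per-document topic mixtures and per-topic word distributions.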

    4. Integrating Descriptive and Generative Models (Guo et al., 2006)

  3. Inference Algorithms.

    1. Boundary methods.

      1. Level set evolution. (Chan and Vese, 2001)
      2. Region competition algorithm. (Zhu and Yuille, 1996a)

    2. Discrete Deterministic Inference.

      1. Graph-Cuts: $\alpha$-Expansion algorithm and the min-cut/max-flow relationship; see the submodularity note after this sub-list. (Boykov et al., 2001) (Kolmogorov and Zabih, 2002a)
      2. Graph-Shifts algorithm. (Corso et al., 2007) (Corso et al., 2008b)
      3. Sum-Product algorithm (exact Belief Propagation). (Bishop, 2006, §8) (Yedidia et al., 2001) (Frey and MacKay, 1997) (Felzenszwalb and Huttenlocher, 2006)

      4. Generalized Belief Propagation. (Yedidia et al., 2005) (Yedidia et al., 2000)

      5. Inference on And-Or graphs. (Zhu and Mumford, 2007) (Han and Zhu, 2005)
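      A note for orientation (standard result from Kolmogorov and Zabih (2002a), cited in item 1): a pairwise binary energy $E(x) = \sum_i \theta_i(x_i) + \sum_{(i,j)} \theta_{ij}(x_i, x_j)$ can be minimized exactly by one min-cut/max-flow computation precisely when every pairwise term is submodular, i.e., $\theta_{ij}(0,0) + \theta_{ij}(1,1) \le \theta_{ij}(0,1) + \theta_{ij}(1,0)$; the $\alpha$-expansion algorithm of Boykov et al. (2001) reduces multi-label problems to a sequence of such binary cuts.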

    3. Stochastic Inference. (Forsyth et al., 2001)

      1. Gibbs sampling; a minimal denoising sampler is sketched at the end of this outline. (Geman and Geman, 1984) (Winkler, 2006, §5,7)
      2. Metropolis-Hastings and Markov chain Monte Carlo methods. (Winkler, 2006, §10) (Tierney, 1994) (Liu, 2002)
      3. Data-driven Markov chain Monte Carlo (DDMCMC) algorithm. (Tu and Zhu, 2002) (Tu et al., 2005) (Green, 1995)
      4. Swendsen-Wang algorithm. (Swendsen and Wang, 1987) (Barbu and Zhu, 2005) (Barbu and Zhu, 2004)
      5. Sequential MCMC and Particle Filters. (Isard and Blake, 1998) (Liu and Chen, 1998)
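      To make items 1 and 2 of the stochastic-inference list concrete, below is a minimal single-site Gibbs sampler for binary image denoising, in the spirit of Geman and Geman (1984). This is my own illustrative sketch, not course-distributed code: the model assumed is $x \in \{-1,+1\}^{H \times W}$ with Ising prior $p(x) \propto \exp(\beta \sum_{\langle i,j \rangle} x_i x_j)$ and Gaussian likelihood $y_i \sim N(x_i, \sigma^2)$.

        import numpy as np

        def gibbs_denoise(y, beta=2.0, sigma=0.5, sweeps=50, rng=None):
            # One sweep resamples every site from its local conditional.
            rng = np.random.default_rng() if rng is None else rng
            x = np.where(y > 0, 1.0, -1.0)  # initialize at thresholded data
            h, w = y.shape
            for _ in range(sweeps):
                for i in range(h):
                    for j in range(w):
                        # Sum over the 4-neighborhood (free boundary).
                        nb = 0.0
                        if i > 0:
                            nb += x[i - 1, j]
                        if i < h - 1:
                            nb += x[i + 1, j]
                        if j > 0:
                            nb += x[i, j - 1]
                        if j < w - 1:
                            nb += x[i, j + 1]
                        # log p(x_ij=+1 | rest, y) - log p(x_ij=-1 | rest, y)
                        # = 2*beta*nb + 2*y_ij/sigma^2 under the model above.
                        d = 2.0 * beta * nb + 2.0 * y[i, j] / sigma ** 2
                        p_plus = 1.0 / (1.0 + np.exp(-d))
                        x[i, j] = 1.0 if rng.random() < p_plus else -1.0
            return x

      Averaging samples across sweeps estimates the posterior mean; cooling a temperature on the conditional instead (dividing d by T and driving T toward 0 across sweeps) recovers the simulated-annealing MAP estimator of Geman and Geman (1984).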

Course Work

Homeworks: There will be two homeworks, equally weighted. They will cover both theoretical and practical (implementation) aspects of the material. Students may collectively discuss the homework problems, but each student must write up solutions independently. No sharing of written/typed materials of any sort is allowed.

Programming Language: Student's choice for homeworks and the project (generally Python, Matlab, Java, or C/C++). However, no platform-specific libraries/packages are permitted.

No sharing of any source code or written/typed materials is permitted. No taking of any source code or written/typed materials from the internet is permitted. No use of any third-party libraries, other than those explicitly mentioned in the assignment description, is permitted. Refer to the Academic Integrity statement at the end of the syllabus for more information; a zero-tolerance policy on cheating will be adopted in this course. Simply put: if you cheat once, you will get an F.

Grading: Letter grading distributed as follows:

  • Discussion (20%)
  • Homeworks (20%)
  • Project (60%)

Project

The goal of the project is to have each student (or pair of students) solve a real problem using the ideas learned herein. Below is a list of possible projects, but the student is encouraged to design a project of their own in conjunction with the professor. The ultimate goal is for each student to do some new work. Within reason, camera and video equipment will be made available to the students from the Vision Lab. Suitable arrangements should be made with the instructor to facilitate equipment use.

List of Possible Projects

  • Learning and sampling generic image priors such as line processes (1).

  • MRF Potential Learning by Minimax Entropy (1).

  • Sampling Julesz ensemble of textures (1).

  • Action Recognition with a generative model of dynamics (1).

  • Inference by Tree-Reweighted Message Passing (1).

  • Extensions to pictorial structures models for Object Detection (1).

  • Learning and sampling a stochastic graph model (2).

  • Learning and sampling the primal sketch from natural or medical images (2).

Project Schedule

9/27
Project proposal due in class. 1-page description of the proposed project and the type of problem/data. It should include three planned milestones.

10/18
Milestone 1 Report due in class. (1-paragraph)

11/10
Milestone 2 Report due in class. (1-paragraph) Note, 11/10 is the CVPR paper deadline.

12/10
Final milestone and public poster / demo session (class-time).

12/13 23:59
Project write-up and source code are due.

Project Write-Up

The write-up will be in standard two-column IEEE journal format at a maximum of 10 pages. It should be approached as a standard paper containing introduction and related work, methodology, results, and discussion.

Additional Information

Similar Courses at This and Other Institutions: (incomplete and in no particular order)

Course Bibliography

Most items below have been cited above, but there are also some additional references that extend the content of the course. When available, PDFs of articles have been uploaded to the UBLearns "Course Documents" section. The naming convention is the first two characters of (up to) the first three authors' surnames, followed by an acronym for the venue (e.g., CVPR for Computer Vision and Pattern Recognition), followed by the year. So, the Geman and Geman 1984 PAMI article is GeGePAMI1984.pdf.
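A small helper expressing the stated convention (illustrative only; bib_filename is my own name for it, and it assumes bare surnames are passed in):

    def bib_filename(surnames, venue, year):
        # First two characters of (up to) the first three authors'
        # surnames, then the venue acronym, then the year.
        prefix = "".join(s[:2] for s in surnames[:3])
        return "%s%s%d.pdf" % (prefix, venue, year)

    # bib_filename(["Geman", "Geman"], "PAMI", 1984) -> "GeGePAMI1984.pdf"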

Bibliography

A. Barbu and S. C. Zhu.
Multigrid and Multi-level Swendsen-Wang Cuts for Hierarchic Graph Partitions.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 731-738, 2004.

A. Barbu and S. C. Zhu.
Generalizing Swendsen-Wang to Sampling Arbitrary Posterior Probabilities.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 27 (8): 1239-1253, 2005.

J. Besag.
Spatial interaction and the statistical analysis of lattice systems (with discussion).
Journal of the Royal Statistical Society, Series B, 36: 192-236, 1974.

J. Besag.
On the statistical analysis of dirty pictures (with discussion).
Journal of the Royal Statistical Society, Series B, 48: 259-302, 1986.

C. M. Bishop.
Pattern Recognition and Machine Learning.
Springer, 2006.

C. M. Bishop and J. M. Winn.
Non-linear Bayesian Image Modelling.
In European Conference on Computer Vision, volume 1, pages 3-17, 2000.

D. M. Blei and M. I. Jordan.
Modeling Annotated Data.
In Proceedings of SIGIR, 2003.

D. M. Blei, A. Y. Ng, and M. I. Jordan.
Latent Dirichlet allocation.
Journal of Machine Learning Research, 3: 993-1022, 2003.

C. A. Bouman and M. Shapiro.
A multiscale random field model for Bayesian image segmentation.
IEEE Transactions on Image Processing, 3 (2): 162-177, 1994.

Y. Boykov, O. Veksler, and R. Zabih.
Fast Approximate Energy Minimization via Graph Cuts.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 (11): 1222-1239, 2001.

B. Chalmond.
Modeling and Inverse Problems in Image Analysis, volume 155 of Applied Mathematical Sciences.
Springer, 2003.

T. F. Chan and L. A. Vese.
Active contours without edges.
IEEE Trans. on Image Processing, 10 (2): 266-277, 2001.

A. Chardin and P. Perez.
Semi-iterative inferences with hierarchical energy-based models for image analysis.
In Energy Minimization Methods in Computer Vision and Pattern Recognition: Second International Workshop (EMMCVPR'99), York, UK, 1999.
URL http://www.springerlink.com/content/6yq1rglku6ccxjpu.

R. R. Coifman and M. V. Wickerhauser.
Entropy-based algorithms for best basis selection.
IEEE Transactions on Information Theory, 38 (2): 713-718, 1992.

T.F. Cootes and C.J. Taylor.
Statistical Models of Appearance for Computer Vision.
Technical report, Imaging Science and Biomedical Engineering, University of Manchester, 2004.

J. J. Corso, E. Sharon, and A. Yuille.
Multilevel Segmentation and Integrated Bayesian Model Classification with an Application to Brain Tumor Segmentation.
In Medical Image Computing and Computer Assisted Intervention, volume 2, pages 790-798, 2006.

J. J. Corso, Z. Tu, A. Yuille, and A. W. Toga.
Segmentation of Sub-Cortical Structures by the Graph-Shifts Algorithm.
In N. Karssemeijer and B. Lelieveldt, editors, Proceedings of Information Processing in Medical Imaging, pages 183-197, 2007.

J. J. Corso, E. Sharon, S. Dube, S. El-Saden, U. Sinha, and A. Yuille.
Efficient multilevel brain tumor segmentation with integrated bayesian model classification.
IEEE Transactions on Medical Imaging, 27 (5): 629-640, 2008a.

J. J. Corso, Z. Tu, and A. Yuille.
MRF Labeling with a Graph-Shifts Algorithm.
In Proceedings of International Workshop on Combinatorial Image Analysis, volume LNCS 4958, pages 172-184, 2008b.

J. M. Coughlan and A. L. Yuille.
Algorithms from Statistical Physics for Generative Models of Images.
Image and Vision Computing, Special Issue on Generative-Model Based Vision, 21 (1): 29-36, 2003.

H. Derin and H. Elliott.
Modeling and segmentation of noisy and textured images using Gibbs random fields.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 9 (1): 39-55, 1987.

R. C. Dubes and A. K. Jain.
Random field models in image analysis.
Journal of Applied Statistics, 16 (2): 131 - 164, 1989.

L. Fei-Fei and P. Perona.
A Bayesian Hierarchical Model for Learning Natural Scene Categories.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005.

P. F. Felzenszwalb and D. P. Huttenlocher.
Efficient Belief Propagation for Early Vision.
International Journal of Computer Vision, 70 (1), 2006.

T. S. Ferguson.
A Bayesian analysis of some nonparametric problems.
The Annals of Statistics, 1 (2): 209-230, 1973.

D. J. Field.
Relations between the statistics of natural images and the response properties of cortical cells.
Journal of the Optical Society of America A, 4 (12): 2379-2394, 1987.

D. J. Field.
What is the goal of sensory coding?
Neural Computation, 6: 559-601, 1994.

D. Forsyth, J. Haddon, and S. Ioffe.
The joy of sampling.
International Journal of Computer Vision, 41 (1): 109-134, 2001.

B. J. Frey and D. MacKay.
A Revolution: Belief Propagation in Graphs with Cycles.
In Proceedings of Neural Information Processing Systems (NIPS), 1997.

S. Geman and D. Geman.
Stochastic Relaxation, Gibbs Distributions, and Bayesian Restoration of Images.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 6: 721-741, 1984.

B. Gidas.
A Renormalization Group Approach to Image Processing Problems.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 11 (2): 164-180, 1989.
ISSN 0162-8828.
doi: 10.1109/34.16712.

P. J. Green.
Reversible jump Markov chain Monte Carlo computation and Bayesian model determination.
Biometrika, 82 (4): 711-732, 1995.

T. L. Griffiths and A. Yuille.
Technical introduction: A primer on probabilistic inference.
Technical report, University of California at Los Angeles, 2006.

C. E. Guo, S. C. Zhu, and Y. N. Wu.
Modeling Visual Patterns by Integrating Descriptive and Generative Models.
International Journal of Computer Vision, 53 (1): 5-29, 2003.

C. E. Guo, S. C. Zhu, and Y. N. Wu.
Primal sketch: Integrating texture and structure.
Computer Vision and Image Understanding, 2006.
(to appear).

F. Han and S. C. Zhu.
Bottom-up/top-down image parsing by attribute graph grammar.
In Proceedings of International Conference on Computer Vision, volume 2, pages 1778-1785, 2005.

K. M. Hanson.
Introduction to Bayesian image analysis.
Medical Imaging: Image Processing, Proc. SPIE 1898: 716-731, 1993.

K. Held, E. R. Kops, B. J. Krause, W. M. Wells III, R. Kikinis, and H. W. Muller-Gartner.
Markov random field segmentation of brain MR images.
IEEE Transactions on Medical Imaging, 16 (6): 878-886, 1997.

M. Isard and A. Blake.
CONDENSATION - conditional density propagation for visual tracking.
International Journal of Computer Vision, 29 (1): 5-28, 1998.

B. Julesz.
Textons, the elements of texture perception and their interactions.
Nature, 290: 91-97, 1981.

D. Kersten.
Predictability and Redundancy of Natural Images.
Journal of the Optical Society of America A, 4 (12): 2395-2400, 1987.

P. Kohli, M. P. Kumar, and P. H. S. Torr.
P$^3$ & Beyond: Solving energies with higher order cliques.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.

P. Kohli, L. Ladicky, and P. H. S. Torr.
Robust higher order potentials for enforcing label consistency.
International Journal of Computer Vision, 82: 302-324, 2009.

V. Kolmogorov and R. Zabih.
What Energy Functions Can Be Minimized via Graph Cuts?
In European Conference on Computer Vision, volume 3, pages 65-81, 2002a.

V. Kolmogorov and R. Zabih.
Multicamera Scene Reconstruction via Graph-Cuts.
In European Conference on Computer Vision, pages 82-96, 2002b.

S. Krishnamachari and R. Chellappa.
Multiresolution GMRF models for texture segmentation.
Volume 4, pages 2407-2410, 1995.

S. Kumar and M. Hebert.
Discriminative Random Fields: A Discriminative Framework for Contextual Interaction in Classification.
In International Conference on Computer Vision, 2003.

L. Ladicky, C. Russell, P. Kohli, and P. H. S. Torr.
Associative hierarchical CRFs for object class image segmentation.
In Proceedings of International Conference on Computer Vision, 2009.

J. Lafferty, A. McCallum, and F. Pereira.
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data.
In Proceedings of International Conference on Machine Learning, pages 282-289, 2001.

C. H. Lee, M. Schmidt, A. Murtha, A. Bistritz, J. Sander, and R. Greiner.
Segmenting brain tumor with conditional random fields and support vector machines.
In Proceedings of Workshop on Computer Vision for Biomedical Image Applications at International Conference on Computer Vision, pages 469-478, 2005.

S. Lee and M. M. Crawford.
Unsupervised multistage image classification using hierarchical clustering with a Bayesian similarity measure.
IEEE Transactions on Image Processing, 14 (3): 312-320, 2005.

S. Z. Li.
Markov Random Field Modeling in Image Analysis.
Springer-Verlag, 2nd edition, 2001.

J. S. Liu.
Monte Carlo Strategies in Scientific Computing.
Springer, 2002.

J. S. Liu and R. Chen.
Sequential Monte Carlo methods for dynamic systems.
Journal of the American Statistical Association, 93 (443): 1032-1044, 1998.

S. N. MacEachern and P. Muller.
Estimating mixture of Dirichlet process models.
Journal of Computational and Graphical Statistics, 7 (2): 223-238, 1998.

J. Malik, S. Belongie, J. Shi, and T. Leung.
Textons, Contours, and Regions: Cue Combination in Image Segmentation.
In International Conference on Computer Vision, 1999.

M. R. Naphade and T. S. Huang.
A Probabilistic Framework for Semantic Video Indexing, Filtering, and Retrieval.
IEEE Transactions on Multimedia, 3 (1): 141-151, 2001.

B. A. Olshausen and D. J. Field.
Sparse coding with an overcomplete basis set: A strategy employed by V1?
Vision Research, 37 (23): 3311-3325, 1997.

A. Raj and R. Zabih.
A graph cut algorithm for generalized image deconvolution.
In Proceedings of International Conference on Computer Vision, 2005.

A. Ranganathan.
The Dirichlet process mixture (DPM) model.
September 2004.
URL http://www.cs.rochester.edu/~michalak/mlseminar/fall05/dirichlet.pdf.

S. Richardson and P. J. Green.
On Bayesian Analysis of Mixtures With an Unknown Number of Components.
Journal of the Royal Statistical Society, Series B, 59 (4): 731-758, 1997.

D. L. Ruderman.
The statistics of natural images.
Network: Computation in Neural Systems, 5 (4): 517-548, 1994.

M. Schaap, I. Smal, C. Metz, T. van Walsum, and W. Niessen.
Bayesian Tracking of Elongated Structures in 3D Images.
In N. Karssemeijer and B. Lelieveldt, editors, Proceedings of Information Processing in Medical Imaging, 2007.

E. P. Simoncelli and B. A. Olshausen.
Natural image statistics and neural representation.
Annual Review of Neuroscience, 24: 1193-1216, 2001.

M. Steyvers, P. Smyth, M. Rosen-Zvi, and T. Griffiths.
Probabilistic author-topic models for information discovery.
In 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2004.

E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky.
Describing visual scenes using transformed Dirichlet processes.
In Proceedings of Neural Information Processing Systems (NIPS), 2005.

R. H. Swendsen and J. S. Wang.
Nonuniversal Critical Dynamics in Monte Carlo Simulations.
Physical Review Letters, 58 (2): 86-88, 1987.

Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei.
Hierarchical Dirichlet processes.
In Advances in Neural Information Processing Systems (NIPS) 17, 2005.

L. Tierney.
Markov chains for exploring posterior distributions.
The Annals of Statistics, 22 (4): 1701-1728, 1994.

P. Torr and C. Davidson.
IMPSAC: Synthesis of Importance Sampling and Random Sample Consensus.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 25 (3): 354-364, 2003.

A. Torralba and A. Oliva.
Statistics of natural image categories.
Network: Computation in Neural Systems, 14: 391-412, 2003.

F. Torre and M. J. Black.
Robust Principal Component Analysis for Computer Vision.
In International Conference on Computer Vision, 2001.

Z. Tu and S. C. Zhu.
Image Segmentation by Data-Driven Markov Chain Monte Carlo.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (5): 657-673, 2002.

Z. Tu, X. R. Chen, A. L. Yuille, and S. C. Zhu.
Image Parsing: Unifying Segmentation, Detection and Recognition.
International Journal of Computer Vision, 63 (2): 113-140, 2005.

H. M. Wallach.
Conditional Random Fields: An Introduction.
Technical Report MS-CIS-04-21, University of Pennsylvania, 2004.

G. Winkler.
Image Analysis, Random Fields, and Markov Chain Monte Carlo Methods.
Springer, 2nd edition, 2006.

Y. N. Wu, S. C. Zhu, and C. E. Guo.
From Information Scaling of Natural Images to Regimes of Statistical Models.
Quarterly of Applied Mathematics, 2007.

J. S. Yedidia, W. T. Freeman, and Y. Weiss.
Generalized belief propagation.
In Advances in Neural Information Processing Systems (NIPS), volume 13, pages 689-695, 2000.

J. S. Yedidia, W. T. Freeman, and Y. Weiss.
Bethe free energy, Kikuchi approximations and belief propagation algorithms.
Technical Report TR2001-16, Mitsubishi Electronic Research Laboratories, May 2001.

J. S. Yedidia, W. T. Freeman, and Y. Weiss.
Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms.
IEEE Transactions on Information Theory, 51 (7): 2282-2312, 2005.

R. Zabih and V. Kolmogorov.
Spatially Coherent Clustering Using Graph Cuts.
In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 437-444, 2004.

Y. Zhang, M. Brady, and S. Smith.
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm.
IEEE Transactions on Medical Imaging, 20 (1): 45-57, January 2001.

S. C. Zhu.
Stochastic jump-diffusion process for computing medial axes in Markov random fields.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 21 (11): 1158-1169, 1999.

S. C. Zhu.
Statistical Modeling and Conceptualization of Visual Patterns.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 25 (6): 691-712, 2003.

S. C. Zhu and D. Mumford.
A stochastic grammar of images.
Foundations and Trends in Computer Graphics and Vision, 2 (4): 259-362, 2007.

S. C. Zhu and D. Mumford.
Prior learning and Gibbs reaction-diffusion.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 19 (11): 1236-1250, 1997.

S. C. Zhu and A. Yuille.
Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 18 (9): 884-900, 1996a.

S. C. Zhu and A. L. Yuille.
FORMS: A Flexible Object Recognition and Modeling System.
International Journal of Computer Vision, 20 (3): 187-212, 1996b.

S. C. Zhu, Y. Wu, and D. Mumford.
Minimax entropy principle and its application to texture modeling.
Neural Computation, 9 (8): 1627-1660, 1997.

S. C. Zhu, Y. N. Wu, and D. B. Mumford.
FRAME: Filters, Random Fields and Maximum Entropy -- Towards a Unified Theory for Texture Modeling.
International Journal of Computer Vision, 27 (2): 1-20, 1998.

S. C. Zhu, C. E. Guo, Y. Wang, and Z. Xu.
What are textons?
International Journal of Computer Vision, 62 (1): 121-143, 2005.

