-
Figshare: a public repository for research data
From the FAQ: figshare is the first online repository for storing and sharing all of your preliminary findings in the form of individual figures, datasets, media or filesets. Post preprint figures on figshare to claim priority and receive feedback on your findings prior to formal publication. You could use it to upload non-peer-reviewed tech reports,
-
CSHL computational vision: day 4
Today was a little less intense than yesterday, mercifully. Geoff Boynton did a tutorial on signal detection theory and estimating psychophysical measures in Matlab. He emphasized that, given the signal detection model, it is easy to find good estimates using Bayesian inference. Whenever the observer's response is binary, you should use the binomial likelihood
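A minimal sketch of that last point (my own illustration with made-up data, not code from the tutorial): with binary responses, the binomial log-likelihood is easy to write down, and its maximum over the detection probability is simply the observed proportion correct.

```python
import math

def binomial_log_likelihood(p, k, n):
    """Log-likelihood of observing k correct responses in n binary trials
    when the probability of a correct response is p (0 < p < 1)."""
    return math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)

# Hypothetical data: 72 correct responses out of 100 trials.
k, n = 72, 100
p_hat = k / n  # under the binomial model, the MLE is the observed proportion
```

Evaluating `binomial_log_likelihood` at `p_hat` gives a larger value than at any other `p`, which is what makes the estimate so easy to compute here.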
-
CSHL computational vision: day 3
Heavy day today. These notes might be slightly more rambling than usual; apologies. Eero Simoncelli (pictured above) delivered a lecture focusing on encoding, and specifically on efficient coding. From Barlow (1961): Sensory relays recode sensory messages so that their redundancy is reduced but comparatively little information is lost. He pointed out that this
-
CSHL computational vision: day 2
Eero Simoncelli delivered a talk focusing on linear systems, convolution and Fourier analysis. From an information-theoretic perspective, a linear or nonlinear transformation of the type performed in cortex can only conserve or lose information. Thus, there is something interesting about the visual information which is discarded. I could not help thinking of
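As a toy illustration of the linear-systems material (mine, not from the talk): a linear shift-invariant system is fully characterized by its impulse response, and its output is the convolution of the input with that impulse response.

```python
def convolve(x, h):
    """Discrete 1-D convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# A two-tap box filter applied to a short signal.
print(convolve([1.0, 2.0, 3.0], [1.0, 1.0]))  # → [1.0, 3.0, 5.0, 3.0]
```

By the convolution theorem, the same operation is a pointwise multiplication in the Fourier domain, which is the link to the Fourier-analysis half of the talk.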
-
CSHL computational vision: day 1
I’m at the Cold Spring Harbor Lab Computational Neuroscience: Vision class for the next two weeks, and I’ll try to write a steady stream about some of the content covered in the course in addition to lighter subjects. Banbury campus is beautiful, it feels like a movie’s depiction of a New England rich people hideaway
-
Using an L1 penalty with an arbitrary error function
The L1 penalty, which corresponds to a Laplacian prior, encourages model parameters to be sparse. There are plenty of solvers for the L1-penalized least-squares problem. It's harder to find methods for error functions other than the sum of squares. L1General by Mark Schmidt solves just such a problem. There are more than a dozen different algorithms implemented
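To sketch the general idea (this is a generic proximal-gradient / ISTA iteration, not one of L1General's actual algorithms): given only the gradient of an arbitrary smooth error function f, alternating a gradient step with soft-thresholding minimizes f(w) + λ‖w‖₁.

```python
def soft_threshold(z, t):
    """Proximal operator of t * |.|: shrink z toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ista(grad, w0, lam, step=0.01, iters=2000):
    """Minimize f(w) + lam * sum_i |w_i| for any smooth f, given its
    gradient, via proximal gradient descent (ISTA)."""
    w = list(w0)
    for _ in range(iters):
        g = grad(w)
        w = [soft_threshold(wi - step * gi, step * lam) for wi, gi in zip(w, g)]
    return w

# Toy smooth error f(w) = 0.5 * (w - 3)^2; with lam = 1 the minimizer of
# f(w) + |w| is w = 2: the unpenalized solution shrunk toward zero by lam.
w = ista(lambda w: [w[0] - 3.0], [0.0], lam=1.0)
```

The error function enters only through `grad`, which is what makes this family of methods applicable beyond least squares, e.g. to logistic loss.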