Personal homepage of Andrey Anikin, mostly devoted to vocal communication in animals, including non-linguistic vocalizations in humans.
Don't miss! An experiment on emotional vocalizations is running at cogsci.se/experiments.html
- Jan 23, 2018: another minor patch release of soundgen, v1.1.2. Having used the package extensively in experimental work, I found and fixed a number of glitches and added some new functionality: a general scale factor for regulating formant bandwidth, a schwa() function for working with formant frequencies, better plotting options, etc.
- Dec 02, 2017: soundgen v1.1.1 is on CRAN. This release is primarily a patch that fixes quite a number of small bugs. Notable new features include support for discontinuous contours allowing rapid transitions like pitch jumps, dedicated functions for post-processing like fading in/out or adding formants to an existing sound, and a one-click formant picker in the interactive app.
- Oct 19, 2017: soundgen v1.1.0 is on CRAN. Formants are synthesized "properly" in this release, with zero-pole or pole-pole models. Other improvements include synthesis of individual glottal pulses with a closed phase, dynamic control of rolloff and amplitude modulation, new functions for generating percussive sounds and raspberries, and more.
- Sep 30, 2017: nonverbal vocal communication in humans is all about call types, not emotions. In this sense we are just another mammal, albeit a rather wordy one. This sweeping argument, appropriately toned down and backed up by some experimental evidence from verbal classifications and a triad task, has been put forward in this article published in the Journal of Nonverbal Behavior.
- Sep 4, 2017: the first official release of soundgen v1.0.0 is published on CRAN. Soundgen is an open-source library written in R. It offers tools for parametric voice synthesis and acoustic analysis.
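For a first impression of what the library does, a minimal R sketch is shown below. The two entry points, soundgen() for synthesis and analyze() for acoustic analysis, are taken from the package documentation, but the specific argument values and defaults here are illustrative assumptions rather than recommended settings:

```r
# Minimal sketch of soundgen usage; treat argument values as illustrative.
install.packages("soundgen")  # once, from CRAN
library(soundgen)

# Synthesize a short [a]-like vocalization with a rising pitch contour
s <- soundgen(
  sylLen   = 400,          # syllable length, ms
  pitch    = c(120, 180),  # F0 contour, Hz (start -> end)
  formants = "a",          # preset formant frequencies for the vowel [a]
  play     = FALSE         # set to TRUE to hear the result
)

# Acoustic analysis of the synthesized waveform
a <- analyze(s, samplingRate = 16000)
```

See the package documentation on CRAN for the full list of control parameters.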
- Jan 09, 2017: research on authenticity in vocal expressions is published in the Quarterly Journal of Experimental Psychology (pdf). The results show that listeners can often tell whether someone laughing, screaming, etc. really feels amused, scared, etc., or is simply faking it. The differences can be acoustically subtle, however, and are much more obvious for some emotions and vocalizations than for others.
- May 02, 2016: a validated corpus of YouTube vocalizations is published in Behavior Research Methods (pdf). This is the first large collection of authentic human non-linguistic vocalizations recorded in real-life contexts. Interestingly, the context was guessed equally well by listeners with very different cultural and linguistic backgrounds.