warbleR
Bioacoustics research encompasses a wide range of questions, study systems and methods, including the software used for analyses. The warbleR and Rraven packages leverage the flexibility of the R environment to offer a broad and accessible bioinformatics tool set. These packages fundamentally rely on two types of data to perform bioacoustics analyses in R:
Sound files: recordings in wav or mp3 format, either from your own research or from open-access databases like xeno-canto
Selection tables: the temporal coordinates (start and end points) of selected acoustic signals within recordings (see the minimal sketch below)
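For orientation, here is a minimal sketch of what a selection table looks like as a data frame. The column names (sound.files, selec, start, end) follow warbleR conventions; the file name and time values are invented for illustration only.
# A made-up selection table with a single selection (start and end times in seconds)
st.example <- data.frame(sound.files = "Phae.long1.wav", selec = 1, start = 1.17, end = 1.34)
st.example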
These packages are both available on CRAN (warbleR, Rraven) as well as on GitHub (warbleR, Rraven). The GitHub repositories will always contain the latest functions and updates. You can also check out the article in Methods in Ecology and Evolution documenting the warbleR package [1].
We welcome all users to provide feedback, contribute updates or new functions and report bugs on warbleR's GitHub repository.
Please note that warbleR
and Rraven
use functions from the seewave
, monitoR
, tuneR
and dtw
packages internally. warbleR
and Rraven
have been designed to make bioacoustics analyses more accessible to R
users, and such analyses would not be possible without the tools provided by the packages above. These packages should be given credit when using warbleR
and Rraven
by including citations in publications as appropriate (e.g. citation("seewave")
).
Parallel processing in warbleR
Parallel processing, or using multiple cores on your machine, can greatly speed up analyses. All iterative warbleR functions now have parallel processing for Linux, Mac and Windows operating systems. These functions also contain progress bars to visualize progress during normal or parallel processing. See [1] for more details about improved running time using parallel processing.
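As a rough sketch of what this looks like in a function call (argument names assumed from the package defaults: parallel for the number of cores and pb for the progress bar):
# Not run: the same detection call on a single core and on four cores
# autodetec(threshold = 10, parallel = 1, pb = TRUE)   # single core, with a progress bar
# autodetec(threshold = 10, parallel = 4, pb = TRUE)   # four cores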
Below we present a case study of microgeographic vocal variation in long-billed hermit hummingbirds, Phaethornis longirostris. Variation at small geographic scales has already been described in this species [2]. Our goal is to search for visible differences in song structure within a site, and then determine whether underlying differences in acoustic parameters are representative of spectrographic distinctiveness. In this vignette, we will demonstrate how to:
Prepare for bioacoustics analyses by downloading warbleR and Rraven
Use Rraven to import Raven selection tables for your own recordings
Obtain recordings from the open-access database xeno-canto
Select signals using warbleR functions
This vignette can be run without an advanced understanding of R, as long as you know how to run code in your console. However, knowing more about basic R coding will be very helpful when modifying the code for your own research questions. For more details about function arguments, input or output, read the documentation for the function in question (e.g. ?querxc).
First, we need to install and load warbleR and Rraven. You will need an R version ≥ 3.2.1 and a seewave version ≥ 2.0.1. Users on UNIX machines (Linux or Mac operating systems) may also need to install fftw3, pkg-config and libsndfile before installing warbleR. These external libraries must be installed through a UNIX terminal; installing them lies outside the scope of this vignette, but more information is readily available online.
### Install packages from CRAN
# Note that if you install from CRAN, then don't run the code to install from GitHub below, and vice versa
install.packages("warbleR")
install.packages("Rraven")
### Alternatively, install warbleR and Rraven from GitHub repositories, which contain the latest updates
# Run this ONLY if devtools is not already installed
install.packages("devtools")
# Load devtools to access the install_github function
library(devtools)
# Install packages from GitHub
# install_github("maRce10/warbleR")
# install_github("maRce10/Rraven")
# install_github("maRce10/NatureSounds")
# Load warbleR and Rraven into your global environment
X <- c("warbleR", "Rraven")
invisible(lapply(X, library, character.only = TRUE))
This vignette series will not always include all available warbleR functions, as existing functions are updated and new functions are added. To see all the functions available in this package:
# The package must be loaded in your working environment
ls("package:warbleR")
# Create a new directory and set your working directory (assuming that you are in your /home/username directory)
dir.create(file.path(getwd(),"warbleR_example"))
setwd(file.path(getwd(),"warbleR_example"))
# Check your location
getwd()
Rraven is an interface between Raven and R that allows you to import selection tables for your own recordings. This is very useful if you prefer to select signals in recordings outside of R. Once you have selection tables imported into R, and the corresponding sound files in your working directory, you can move on to making spectrograms or performing analyses (see the next vignette in this series).
The sound files and selection tables loaded here correspond to male long-billed hermit hummingbird songs recorded at La Selva Biological Station in Costa Rica. Later, we will add to this data set by searching for more recordings on the xeno-canto open-access database.
Check out the Rraven package documentation for more functions and information (although you will need Raven or Syrinx installed on your computer for some functions).
# Load Raven example selection tables
data("selection_files")
# Write out Raven example selection tables as physical files
out <- lapply(1:2, function(x)
writeLines(selection_files[[x]], con = names(selection_files)[x]))
# Write example sound files out as physical .wav files
data(list = c("Phae.long1", "Phae.long2"))
writeWave(Phae.long1, "Phae.long1.wav")
writeWave(Phae.long2, "Phae.long2.wav")
# Import selections
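# As far as we understand the Rraven arguments: all.data = FALSE returns only the basic
# selection columns, freq.cols = FALSE drops the frequency columns, and
# warbler.format = TRUE renames columns to the warbleR selection table format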
sels <- imp_raven(all.data = FALSE, freq.cols = FALSE, warbler.format = TRUE)
str(sels)
# Write out the imported selections as a .csv for later use
write.csv(sels, "Raven_sels.csv", row.names = FALSE)
selection table
Downstream warbleR functions require selection tables in order to run correctly. Use the function selection_table to convert your data frame into an object of class selection_table. In future versions of warbleR, all functions will require selection table objects of class selection_table.
sels <- selection_table(X = sels)
str(sels)
class(sels)
The open-access xeno-canto database is an excellent source of sound files across avian species. You can query this database by a species or genus of interest. The function querxc has two types of output:
Metadata of recordings: geographic coordinates, recording quality, recordist, type of signal, etc.
Sound files: sound files in mp3 format are returned if the argument download is set to TRUE.
We recommend downloading metadata first from xeno-canto, as these data can be filtered in R so that you only download the recordings relevant to your question.
Here, we will query the xeno-canto database to download more Phaethornis longirostris sound files for our question of how male songs vary at a microgeographic scale.
# Query xeno-canto for all Phaethornis recordings (e.g., by genus)
Phae <- querxc(qword = "Phaethornis", download = FALSE)
# Check out the structure of the resulting data frame
str(Phae)
'data.frame': 899 obs. of 36 variables:
$ Recording_ID : int 406968 403856 403854 387711 275252 261696 261695 261694 261693 261692 ...
$ Genus : chr "Phaethornis" "Phaethornis" "Phaethornis" "Phaethornis" ...
$ Specific_epithet : chr "yaruqui" "yaruqui" "yaruqui" "yaruqui" ...
$ Subspecies : chr "" "" "" "" ...
$ English_name : chr "White-whiskered Hermit" "White-whiskered Hermit" "White-whiskered Hermit" "White-whiskered Hermit" ...
$ Recordist : chr "Myornis" "Myornis" "Myornis" "Jerome Fischer" ...
$ Country : chr "Colombia" "Colombia" "Colombia" "Ecuador" ...
$ Locality : chr "La Chocoana, Guachalito, Nuquí, Chocó" "La Chocoana, Guachalito, Nuquí, Chocó" "La Chocoana, Guachalito, Nuquí, Chocó" "Amagusa Reserve Pichincha" ...
$ Latitude : num 5.627 5.627 5.627 0.161 0.75 ...
$ Longitude : num -77.4 -77.4 -77.4 -78.9 -78.9 ...
$ Vocalization_type: chr "flight call" "flight call" "flight call" "song" ...
$ Audio_file : chr "//www.xeno-canto.org/406968/download" "//www.xeno-canto.org/403856/download" "//www.xeno-canto.org/403854/download" "//www.xeno-canto.org/387711/download" ...
$ License : chr "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-sa/4.0/" ...
$ Url : chr "//www.xeno-canto.org/406968" "//www.xeno-canto.org/403856" "//www.xeno-canto.org/403854" "//www.xeno-canto.org/387711" ...
$ Quality : chr "A" "A" "A" "A" ...
$ Time : chr "?" "?" "?" "09:00" ...
$ Date : chr "2018-08-00" "2017-08-00" "2017-08-00" "2017-09-24" ...
$ Altitude : chr "30" "10" "10" "1300" ...
$ Spectrogram_small: chr "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC406968-small.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403856-small.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403854-small.png" "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC387711-small.png" ...
$ Spectrogram_med : chr "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC406968-med.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403856-med.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403854-med.png" "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC387711-med.png" ...
$ Spectrogram_large: chr "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC406968-large.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403856-large.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403854-large.png" "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC387711-large.png" ...
$ Spectrogram_full : chr "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC406968-full.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403856-full.png" "//www.xeno-canto.org/sounds/uploaded/OXSIOLJJUP/ffts/XC403854-full.png" "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC387711-full.png" ...
$ Length : chr "0:02" "0:04" "0:01" "0:50" ...
$ Uploaded : chr "2018-03-23" "2018-02-28" "2018-02-28" "2017-09-27" ...
$ Other_species : chr "" "Coereba flaveola" "" "" ...
$ Remarks : chr "In forest." "Forest border." "Forest border." "" ...
$ Bird_seen : chr "yes" "yes" "yes" "yes" ...
$ Playback_used : chr "no" "no" "no" "no" ...
$ Other_species1 : chr NA NA NA NA ...
$ Other_species2 : chr NA NA NA NA ...
$ Other_species3 : chr NA NA NA NA ...
$ Other_species4 : chr NA NA NA NA ...
$ Other_species5 : chr NA NA NA NA ...
$ Other_species6 : chr NA NA NA NA ...
$ Other_species7 : chr NA NA NA NA ...
$ Other_species8 : chr NA NA NA NA ...
# Query xeno-canto for all Phaethornis longirostris recordings
Phae.lon <- querxc(qword = "Phaethornis longirostris", download = FALSE)
# Check out the structure of the resulting data frame
str(Phae.lon)
'data.frame': 85 obs. of 31 variables:
$ Recording_ID : int 497036 495384 433645 402755 355350 282529 274377 271499 154138 154129 ...
$ Genus : chr "Phaethornis" "Phaethornis" "Phaethornis" "Phaethornis" ...
$ Specific_epithet : chr "longirostris" "longirostris" "longirostris" "longirostris" ...
$ Subspecies : chr "" "cephalus" "" "" ...
$ English_name : chr "Long-billed Hermit" "Long-billed Hermit" "Long-billed Hermit" "Long-billed Hermit" ...
$ Recordist : chr "Jerome Fischer" "Guy Kirwan" "Oscar Campbell" "Marilyn Castillo" ...
$ Country : chr "Panama" "Panama" "Panama" "Mexico" ...
$ Locality : chr "Achiote Road, Colón Province" "Panama Rainforest Discovery Centre" "Panama Rainforest Discovery Centre" "Boca de Chajul, Marqués de Comillas, Chiapas" ...
$ Latitude : num 9.2 9.13 9.13 16.13 8.94 ...
$ Longitude : num -80 -79.7 -79.7 -90.9 -78.5 ...
$ Vocalization_type: chr "song" "call" "song" "alarm call" ...
$ Audio_file : chr "//www.xeno-canto.org/497036/download" "//www.xeno-canto.org/495384/download" "//www.xeno-canto.org/433645/download" "//www.xeno-canto.org/402755/download" ...
$ License : chr "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-sa/4.0/" "//creativecommons.org/licenses/by-nc-nd/4.0/" ...
$ Url : chr "//www.xeno-canto.org/497036" "//www.xeno-canto.org/495384" "//www.xeno-canto.org/433645" "//www.xeno-canto.org/402755" ...
$ Quality : chr "no score" "A" "A" "A" ...
$ Time : chr "09:30" "09:00" "08:30" "07:30" ...
$ Date : chr "2019-02-10" "2019-07-04" "2018-07-10" "2016-01-18" ...
$ Altitude : chr "30" "70" "70" "180" ...
$ Spectrogram_small: chr "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC497036-small.png" "//www.xeno-canto.org/sounds/uploaded/BZVYBRUAAE/ffts/XC495384-small.png" "//www.xeno-canto.org/sounds/uploaded/RFRTVEHIZX/ffts/XC433645-small.png" "//www.xeno-canto.org/sounds/uploaded/CHRIDPJVMH/ffts/XC402755-small.png" ...
$ Spectrogram_med : chr "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC497036-med.png" "//www.xeno-canto.org/sounds/uploaded/BZVYBRUAAE/ffts/XC495384-med.png" "//www.xeno-canto.org/sounds/uploaded/RFRTVEHIZX/ffts/XC433645-med.png" "//www.xeno-canto.org/sounds/uploaded/CHRIDPJVMH/ffts/XC402755-med.png" ...
$ Spectrogram_large: chr "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC497036-large.png" "//www.xeno-canto.org/sounds/uploaded/BZVYBRUAAE/ffts/XC495384-large.png" "//www.xeno-canto.org/sounds/uploaded/RFRTVEHIZX/ffts/XC433645-large.png" "//www.xeno-canto.org/sounds/uploaded/CHRIDPJVMH/ffts/XC402755-large.png" ...
$ Spectrogram_full : chr "//www.xeno-canto.org/sounds/uploaded/JPBSNBUUEF/ffts/XC497036-full.png" "//www.xeno-canto.org/sounds/uploaded/BZVYBRUAAE/ffts/XC495384-full.png" "//www.xeno-canto.org/sounds/uploaded/RFRTVEHIZX/ffts/XC433645-full.png" "//www.xeno-canto.org/sounds/uploaded/CHRIDPJVMH/ffts/XC402755-full.png" ...
$ Length : chr "0:27" "0:49" "0:58" "0:03" ...
$ Uploaded : chr "2019-09-13" "2019-09-01" "2018-09-09" "2018-02-14" ...
$ Other_species : chr "" "" "" "" ...
$ Remarks : chr "" "Lek with up to three different individuals. Focal bird was perched 1.5 m above ground on a 45-degree angle twig." "Male seen on lek; 3 feet above ground in gap in undergrowth. Calling incessantly and quivering tail as doing so"| __truncated__ "" ...
$ Bird_seen : chr "yes" "yes" "yes" "yes" ...
$ Playback_used : chr "no" "no" "no" "no" ...
$ Other_species1 : chr NA NA NA NA ...
$ Other_species2 : chr NA NA NA NA ...
$ Other_species3 : chr NA NA NA NA ...
You can then use the function xcmaps to visualize the geographic spread of the queried recordings. xcmaps will create an image file with a map per species in your current directory if img = TRUE. If img = FALSE, maps will be displayed in the graphics device instead.
# Image type default is jpeg, but tiff files have better resolution
# When the data frame contains multiple species, this will yield one map per species
xcmaps(X = Phae, img = TRUE, it = "tiff") # all species in the genus
xcmaps(X = Phae.lon, img = FALSE) # a single species
In most cases, you will need to filter the xeno-canto metadata by the type of signal you want to analyze. Once you have subset the metadata, you can input the filtered metadata back into querxc to download only the selected recordings. There are many ways to filter data in R, and the example below can be modified to fit your own data.
Here, before downloading the sound files themselves from xeno-canto, we want to ensure that we select high-quality sound files that contain songs (rather than other acoustic signal types) and that were also recorded at La Selva Biological Station in Costa Rica.
# How many recordings are available for Phaethornis longirostris?
nrow(Phae.lon)
[1] 85
# How many signal types exist in the xeno-canto metadata?
# (Vocalization_type is a character vector rather than a factor, so levels() returns NULL;
# the table() call below lists the signal types instead)
levels(Phae.lon$Vocalization_type)
NULL
# How many recordings per signal type?
table(Phae.lon$Vocalization_type)
        300    alarm call       call      calls   flight call   lek, song    lekking       song
          1             1          6          1             2           2          1         68
song at lek   song, wing whirrs
          2                   1
# Filter the metadata to select the signals we want to retain
# First by quality
Phae.lon <- Phae.lon[Phae.lon$Quality == "A", ]
nrow(Phae.lon)
[1] 12
# Then by signal type
Phae.lon.song <- Phae.lon[grep("song", Phae.lon$Vocalization_type, ignore.case = TRUE), ]
nrow(Phae.lon.song)
[1] 9
# Finally by locality
Phae.lon.LS <- Phae.lon.song[grep("La Selva Biological Station, Sarapiqui, Heredia", Phae.lon.song$Locality, ignore.case = FALSE), ]
# Check the resulting data frame; only the La Selva recordings remain
str(Phae.lon.LS)
'data.frame': 3 obs. of 31 variables:
$ Recording_ID : int 154138 154129 154072
$ Genus : chr "Phaethornis" "Phaethornis" "Phaethornis"
$ Specific_epithet : chr "longirostris" "longirostris" "longirostris"
$ Subspecies : chr "" "" ""
$ English_name : chr "Long-billed Hermit" "Long-billed Hermit" "Long-billed Hermit"
$ Recordist : chr "Marcelo Araya-Salas" "Marcelo Araya-Salas" "Marcelo Araya-Salas"
$ Country : chr "Costa Rica" "Costa Rica" "Costa Rica"
$ Locality : chr "La Selva Biological Station, Sarapiqui, Heredia" "La Selva Biological Station, Sarapiqui, Heredia" "La Selva Biological Station, Sarapiqui, Heredia"
$ Latitude : num 10.4 10.4 10.4
$ Longitude : num -84 -84 -84
$ Vocalization_type: chr "song" "song" "song"
$ Audio_file : chr "//www.xeno-canto.org/154138/download" "//www.xeno-canto.org/154129/download" "//www.xeno-canto.org/154072/download"
$ License : chr "//creativecommons.org/licenses/by-nc-sa/3.0/" "//creativecommons.org/licenses/by-nc-sa/3.0/" "//creativecommons.org/licenses/by-nc-sa/3.0/"
$ Url : chr "//www.xeno-canto.org/154138" "//www.xeno-canto.org/154129" "//www.xeno-canto.org/154072"
$ Quality : chr "A" "A" "A"
$ Time : chr "14:18" "10:09" "7:05"
$ Date : chr "2010-05-21" "2010-05-21" "2010-05-28"
$ Altitude : chr "" "" ""
$ Spectrogram_small: chr "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154138-small.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154129-small.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154072-small.png"
$ Spectrogram_med : chr "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154138-med.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154129-med.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154072-med.png"
$ Spectrogram_large: chr "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154138-large.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154129-large.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154072-large.png"
$ Spectrogram_full : chr "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154138-full.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154129-full.png" "//www.xeno-canto.org/sounds/uploaded/EMCWQLLKEW/ffts/XC154072-full.png"
$ Length : chr "2:58" "2:25" "3:09"
$ Uploaded : chr "2013-11-11" "2013-11-11" "2013-11-11"
$ Other_species : chr "" "" ""
$ Remarks : chr "Recording equipment: Marantz Pmd 660+sennheiser ME67. Comments: individuo-AK; lek-Sura. Primera parte de grabac"| __truncated__ "Recording equipment: Marantz Pmd 660+sennheiser ME67. Comments: individuo-AK; lek-Sura. video 511, revisar. 9 m"| __truncated__ "Recording equipment: Marantz Pmd 660+sennheiser ME67. Comments: individuo-VA; lek-CCL. en percha ded VA, probab"| __truncated__
$ Bird_seen : chr "yes" "yes" "yes"
$ Playback_used : chr "no" "no" "no"
$ Other_species1 : chr NA NA NA
$ Other_species2 : chr NA NA NA
$ Other_species3 : chr NA NA NA
We can check if the location coordinates make sense (all recordings should be from a single place in Costa Rica) by making a map of these recordings using xcmaps.
# map in the RStudio graphics device (img = FALSE)
xcmaps(Phae.lon.LS, img = FALSE)
Once you're sure you want the recordings, use querxc to download the files. Also, save the metadata as a .csv file.
# Download sound files
querxc(X = Phae.lon.LS)
# Save the metadata object as a .csv file
write.csv(Phae.lon.LS, "Phae_lon.LS.csv", row.names = FALSE)
xeno-canto maintains recordings in mp3 format due to file size restrictions. However, we require wav format for all downstream analyses. Compression from wav to mp3 and back involves information loss, but recordings that have undergone this transformation have been successfully used in research [3].
To convert mp3 to wav, we can use the warbleR function mp32wav, which relies on an underlying function from the tuneR package. This function does not always work (and it remains unclear why!); the bug should be fixed in future versions of tuneR. If RStudio aborts when running mp32wav, use an online mp3 to wav converter, or download the open-source software Audacity (available for Mac, Linux and Windows).
After the mp3 files have been converted, we need to check that the wav files are not corrupted and can be read into R (some wav files cannot be read due to format or permission issues).
# Always check you're in the right directory beforehand
# getwd()
# here we are downsampling the original sampling rate of 44.1 kHz to speed up downstream analyses in the vignette series
mp32wav(samp.rate = 22.05)
# Use checkwavs to see if wav files can be read
checkwavs()
We now have .wav files for the existing recordings (Phae.long1.wav through Phae.long4.wav, representing recordings made in the field) as well as for the recordings downloaded from xeno-canto. The existing Phae.long*.wav recordings have associated selection tables that were made in Raven, but the xeno-canto recordings have no selection tables, as we have not yet parsed these sound files to select signals within them.
Depending on your question(s), you can combine your own sound files and those from xeno-canto into a single data set (after ground-truthing). This is made possible by the fact that warbleR functions will read in all sound files present in your working directory.
For the main case study in this vignette, we will move forward with only the xeno-canto sound files. We will use the example sound files when demonstrating warbleR functions that are not mandatory for the case study but may be useful for your own workflow (e.g. consolidate below).
To continue the workflow, remove all of the example wav files from your working directory:
# Make sure you are in the right working directory
# Note that all the example sound files begin with the pattern "Phae.long"
wavs <- list.files(pattern = "wav$")
wavs
rm <- wavs[grep("Phae.long", wavs)]
file.remove(rm)
# Check that the right wav files were removed
# Only the xeno-canto wav files should remain
list.files(pattern = "wav$")
Since warbleR handles sound files in working directories, it's good practice to keep sound files associated with the same project in a single directory. If you're someone who likes to make a new directory for every batch of recordings or new analysis associated with the same project, you may find the consolidate function useful.
In case you have your own recordings in wav format and have skipped the previous sections, you must specify the location of the sound files you will use prior to running downstream functions by setting your working directory again.
# For this example, set your working directory to an empty temporary directory
setwd(tempdir())
# Here we will simulate the problem of having files scattered in multiple directories
# Load .wav file examples from the NatureSounds package
data(list = c("Phae.long1", "Phae.long2", "Phae.long3"))
# Create first folder inside the temporary directory and write new .wav files inside this new folder
dir.create("folder1")
writeWave(Phae.long1, file.path("folder1","Phae_long1.wav"))
writeWave(Phae.long2, file.path("folder1","Phae_long2.wav"))
# Create second folder inside the temporary directory and write new .wav files inside this second new folder
dir.create("folder2")
writeWave(Phae.long3, file.path("folder2","Phae_long3.wav"))
# Consolidate the scattered files into a single folder, and make a .csv file that contains metadata (location, old and new names in the case that files were renamed)
invisible(consolidate(path = tempdir(), save.csv = TRUE))
list.files(path = "./consolidated_folder")
# set your working directory back to "/home/user/warbleR_example" for the rest of the vignette, or to whatever working directory you were using originally
lspec produces image files with spectrograms of whole sound files split into multiple rows. It is a useful tool for filtering recordings by visual inspection: lspec allows you to visually inspect the quality of the recording (e.g. the amount of background noise) and the type, number, and completeness of the vocalizations of interest. You can discard the image files and recordings that you no longer want to analyze.
First, adjust the function arguments as needed. We can work on a subset of the recordings by specifying their names with the flist argument.
# Create a vector of all the recordings in the directory
wavs <- list.files(pattern = "wav$")
# Print this object to see all sound files
# These are the sound files downloaded from xeno-canto
wavs
# Select a subset of recordings to explore lspec() arguments
# Based on the list of wav files we created above
sub <- wavs[c(1, 5)]
# How long are these files? This will determine the number of pages returned by lspec
wavdur(sub)
# ovlp = 10 to speed up function
# tiff image files are better quality and are faster to produce
lspec(flist = sub, ovlp = 10, it = "tiff")
# We can zoom in on the frequency axis by changing flim,
# the number of seconds per row, and number of rows
lspec(flist = sub, flim = c(2, 10), sxrow = 6, rows = 15, ovlp = 10, it = "tiff")
Once satisfied with the argument settings we can make long spectrograms for all the sound files.
# Make long spectrograms for all the xeno-canto sound files
# Omitting flist means lspec runs over all wav files in the working directory
lspec(flim = c(2, 10), ovlp = 10, sxrow = 6, rows = 15, it = "jpeg")
# Concatenate lspec image files into a single PDF per recording
# lspec images must be jpegs to do this
lspec2pdf(keep.img = FALSE, overwrite = TRUE)
The pdf image files (in the working directory) for the xeno-canto recordings should look like this:
The sound file name and page number are placed in the top right corner. The dimensions of the image are made to letter paper size for printing and subsequent visual inspection.
Recording 154123 has a lot of background noise. Delete the wav file for this recording to remove it from subsequent analyses.
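One way to do this from within R is sketched below; the exact file name depends on how querxc named the download, so here we match on the recording ID rather than assuming a full name.
# Not run: find and delete any wav file whose name contains the recording ID
# noisy <- list.files(pattern = "154123.*wav$")
# file.remove(noisy)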
warbleR has two main functions for selecting acoustic signals within recordings: autodetec automatically detects the start and end of signals in sound files based on amplitude, duration, and frequency range attributes, while manualoc provides an interactive interface in the graphics device for manually selecting signals.
Both functions are fastest with shorter recordings, but there are ways to deal with longer recordings (an hour long or more). In this section we expand on some important function arguments, but check out the function documentation for more information.
autodetec
autodetec has two types of output:
A data frame with the recording name, selection number, and start and end times. The temporal coordinates will be passed on to downstream functions to measure acoustic parameters.
A spectrogram per recording, saved in your working directory, with red dotted lines marking the start and end of each detected signal.
Check out the autodetec documentation for more information. The argument threshold is one of the most important detection parameters, along with the arguments that specify signal frequency range and duration. Phaethornis longirostris songs have frequencies between 2 and 10 kHz and durations between 0.05 and 0.5 s.
If you need to detect all or most of the signals within the recording, play around with different arguments to increase detection accuracy. Since you may need to do several rounds of optimization, we recommend using subsets of your recordings to speed up the process. If the species you study produces stereotyped signals, like Phaethornis longirostris, just a few signals are needed per recording, and a low-accuracy detection could yield enough selections.
autodetec does not replace visual inspection of selected signals. Ensure that you set aside the time to inspect all selected signals for accuracy. You will often need to run detection functions several times, and in the process you will get to know your signals better (if you don't already).
# Select a subset of sound files
# Reinitialize the wav object
wavs <- list.files(pattern = ".wav$", ignore.case = TRUE)
# Set a seed so we all have the same results
set.seed(1)
sub <- wavs[sample(1:length(wavs), 3)]
# Run autodetec() on subset of recordings
# The data frame object output is printed to the console, we are not saving this in an object yet, since we are just playing around with argument settings
# you can run this in parallel to speed up computation time
autodetec(flist = sub, bp = c(1, 10), threshold = 10, mindur = 0.05, maxdur = 0.5, envt="abs", ssmooth = 300, ls = TRUE, res = 100, flim = c(1, 12), wl = 300, set = TRUE, sxrow = 6, rows = 15, redo = FALSE)
Check out the image files in your working directory. Note that some songs were correctly detected but other undesired sounds were also detected. In most cases, the undesired selections have a shorter duration than our target signals.
We won't save the autodetec output in an object until we're satisfied with the detection. To improve the detection we should play around with the argument values. Also note that the image files produced by autodetec contain the values used for the different arguments, which can help you compare between runs. Below are some detection parameters that work well for these Phaethornis longirostris recordings:
autodetec(flist = sub, bp = c(2, 10), threshold = 20, mindur = 0.09, maxdur = 0.22, envt = "abs", ssmooth = 900, ls = TRUE, res = 100, flim= c(1, 12), wl = 300, set =TRUE, sxrow = 6, rows = 15, redo = TRUE, it = "tiff", img = TRUE, smadj = "end")
This seems to provide a good detection for most recordings (recording ID 154161):
Once we're satisfied with the detection, we can run autodetec on all of the recordings, removing the argument flist (so that autodetec runs over all wav files in the working directory). We will also save the temporal output in an object.
Phae.ad <- autodetec(bp = c(2, 10), threshold = 20, mindur = 0.09, maxdur = 0.22, envt = "abs", ssmooth = 900, ls = TRUE, res = 100, flim = c(2, 10), wl = 300, set =TRUE, sxrow = 6, rows = 15, redo = TRUE, it = "tiff", img = TRUE, smadj = "end")
Let’s look at the number of selections per sound file:
table(Phae.ad$sound.files)
Signal-to-noise ratio (SNR) can be a useful filter for automated signal detection. When background noise is detected as a signal it will have a low SNR, and this characteristic can be used to remove background noise from the autodetec selection table. SNR = 1 means the signal and the background noise have the same amplitude, so signals with SNR <= 1 are of poor quality. SNR calculations can also be used for different purposes throughout your analysis workflow.
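As a sketch of the idea (not part of the case study itself), once sig2noise has been run below and has added an SNR column to the selection table, a simple threshold filter could drop likely background noise:
# Not run: keep only selections louder than the background noise (SNR > 1)
# Phae.snr <- Phae.snr[Phae.snr$SNR > 1, ]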
snrspecs is a function in the family of spectrogram creators that allows you to pick a margin for measuring noise. These margins are very important for calculating SNR, especially when working with signals separated by short gaps (e.g. duets).
# A margin that's too large causes other signals to be included in the noise measurement
# Re-initialize X as needed, for either autodetec or manualoc output
# Try this with a small subset (about 5%) of the selections first
# Set a seed first, so we all have the same results
set.seed(5)
X <- Phae.ad[sample(1:nrow(Phae.ad),(nrow(Phae.ad)*0.05)), ]
nrow(X)
snrspecs(X = X, flim = c(2, 10), snrmar = 0.5, mar = 0.7, it = "jpeg")
Check out the image files in your working directory. This margin overlaps neighboring signals, so a smaller margin would be better.
# This smaller margin is better
snrspecs(X = X, flim = c(2, 10), snrmar = 0.04, mar = 0.7, it = "jpeg")
Once we’ve picked an SNR margin we can move forward with the SNR calculation. We will measure SNR on every other selection to speed up the process.
Phae.snr <- sig2noise(X = Phae.ad[seq(1, nrow(Phae.ad), 2), ], mar = 0.04)
As we just need a few songs to characterize individuals (here sound files are equivalent to different individuals), we can choose selections with the highest SNR per sound file. In this example, we will choose 5 selections per recording with the highest SNRs.
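# Rank selections within each sound file by SNR (negating SNR so that rank 1 = highest SNR)
# and keep the top 5 per sound file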
Phae.hisnr <- Phae.snr[ave(-Phae.snr$SNR, Phae.snr$sound.files, FUN = rank) <= 5, ]
# save the selections as a physical file
write.csv(Phae.hisnr, "Phae_hisnr.csv", row.names = FALSE)
# Double-check the number of selections per sound file
# Sound files that started off with fewer than 5 selections will keep fewer than 5 here
table(Phae.hisnr$sound.files)
manualoc
Note: manualoc will be deprecated in future versions of warbleR.
manualoc is a function that provides an interactive interface for selecting signals. Check out the manualoc documentation prior to running this example. This function writes a selection table as a .csv file in your working directory.
manualoc now opens an external graphics device for selecting signals. You can stop the function at any point by clicking twice on the stop button.
# Run manualoc() with frequency range set for Phaethornis longirostris
# Recording comments are enabled to mark recording quality
# Selection comments enabled to include visual classifications
manualoc(flim = c(2, 10), reccomm = TRUE, selcomm = TRUE, osci = TRUE, seltime = 2)
# Read manualoc() output back into R as an object
# This data frame object can be used as input for later functions
manualoc_out <- read.csv("manualoc_output.csv", header = TRUE)
Here we have given examples of how to begin the warbleR workflow. Note that there are many different ways to begin the workflow, depending on your question and source of data. After running the code in this first vignette, you should have an idea of how to use warbleR and Rraven to import selection tables, obtain recordings from xeno-canto and select signals within them.
The next vignette will cover the second phase of the warbleR workflow, which includes methods to visualize signals for quality control and classification.
Please cite warbleR when you use the package:
Araya-Salas, M. and Smith-Vidaurre, G. (2017). warbleR: an R package to streamline analysis of animal acoustic signals. Methods in Ecology and Evolution 8, 184-191.
Please report any bugs here.
[1] Araya-Salas, M. and G. Smith-Vidaurre. 2017. warbleR: an R package to streamline analysis of animal acoustic signals. Methods in Ecology and Evolution 8:184-191. doi: 10.1111/2041-210X.12624
[2] Araya-Salas, M. and T. Wright. 2013. Open-ended song learning in a hummingbird. Biology Letters 9(5). doi: 10.1098/rsbl.2013.0625
[3] Medina-García, A., M. Araya-Salas, and T. Wright. 2015. Does vocal learning accelerate acoustic diversification? Evolution of contact calls in Neotropical parrots. Journal of Evolutionary Biology. doi: 10.1111/jeb.12694