Repeated counts of animal abundance can reveal changes in local ecosystem health and inform conservation strategies. Unmanned aircraft systems (UAS), also known as drones, are commonly used to photograph animals in remote locations; however, counting animals in images is a laborious task. Crowd-sourcing can reduce the time required to conduct these censuses considerably, but it must first be validated against expert counts to measure sources of error. Our objectives were to assess the accuracy and precision of citizen science counts and to make recommendations for future citizen science projects. We uploaded drone imagery from Año Nuevo Island (California, USA) to a curated Zooniverse website that instructed citizen scientists to count seals and sea lions. Across 212 days, over 1,500 volunteers counted animals in 90,000 photographs. We quantified the error associated with several descriptive statistics used to extract a single citizen science count per photograph from the 15 repeat counts, and then compared the resulting citizen science counts to expert counts. Although proportional error was relatively low (9% for sea lions and 5% for seals during the breeding seasons) and improved with repeat sampling, the 12+ volunteers required per photograph to reduce error made the process prohibitively slow, taking on average 6 weeks to estimate animal abundance from a single drone flight covering 25 acres, despite strong public outreach efforts. The single best algorithm was 'Median without the lowest two values', demonstrating that citizen scientists tended to underestimate the number of animals present. Citizen scientists accurately counted adult seals, but accuracy was lower during the summer, when sea lions were present and could be mistaken for seals. We underscore the importance of validation efforts and careful project design for researchers hoping to combine citizen science with imagery from drones, occupied aircraft, and/or remote cameras.
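As a minimal sketch of how such an aggregation statistic can be computed (not the project's actual analysis code), the example below contrasts a plain median of repeat counts with a 'median without the lowest two values'; the repeat counts and expert count shown are hypothetical.

```python
import statistics

def median_without_lowest_two(counts):
    """Aggregate repeat counts by discarding the two lowest values
    before taking the median, offsetting a tendency to undercount."""
    if len(counts) <= 2:
        raise ValueError("need more than two repeat counts")
    trimmed = sorted(counts)[2:]  # drop the two lowest counts
    return statistics.median(trimmed)

def plain_median(counts):
    """Baseline aggregation: the unadjusted median of all repeat counts."""
    return statistics.median(counts)

# Hypothetical repeat counts by 15 volunteers for one photograph
repeat_counts = [43, 38, 44, 30, 45, 41, 46, 39, 44, 35, 45, 42, 46, 40, 47]
print(plain_median(repeat_counts))               # 43
print(median_without_lowest_two(repeat_counts))  # 44

# Hypothetical expert count for the same photograph
expert_count = 46
prop_error = abs(median_without_lowest_two(repeat_counts) - expert_count) / expert_count
print(f"proportional error: {prop_error:.2%}")   # about 4%
```

Because the raw median is pulled down when some volunteers undercount, dropping the lowest values before taking the median shifts the aggregated estimate upward, toward the expert count.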