By Rodger Munt, edited by David Trew, Angela Lamacraft & Mike Lotinga, February 2018

Dr Rodger Munt FIOA is a retired scientist with 45 years of research experience. As a long-standing employee of the former Royal Aircraft Establishment, he led successive research teams where vetting of scientific reports for non-technical customers was routine. He regularly sees the reports issued to local authority planning departments in his capacity as a consultee.

This blog is an adaptation of an article published in the Nov-Dec 2015 Acoustics Bulletin, 40 (6), pp16-18.

Part 2: What makes a bad report?

The nature of acoustics is such that there will normally be technical content to demonstrate that a proper study has been undertaken, but its necessity and meaning should be explained in terms that non-acousticians can understand. Thus a journal paper intended for scientists will have a different emphasis from one directed at local authorities. Nevertheless, the technical content should be robust enough to convince an independent appraiser of the relevance and accuracy of the information supplied.

Identifying problems: independent appraisal

When applying professional scepticism, the decision-maker should not necessarily accept evidence at face value and should instead:

critically assess evidence without being overly suspicious or sceptical

corroborate, where necessary, methods used, data collected, proposals and recommendations made

identify information that brings into question the reliability of any documents and evidence

establish whether evidence is misleading, biased, exaggerated, unsubstantiated or contradictory and be prepared to challenge such information

consider whether the person providing the evidence or information lacks competence in key areas, and be prepared to request evidence to confirm the competency of the person(s) submitting the information, providing advice and/or making recommendations

Some examples of technical issues with reports

In appraising reports I have encountered the following problems:

Reports being issued at different times, by the same author and covering the same aspect of an acoustic study, without explanation of why a new report was necessary or why the results were different. The lack of a report number also made them difficult to identify and reference.

A main report dominated by many pages of remotely logged acoustic data without full analysis and interpretation. The main report should have summarised the recordings in a suitable form, preferably as graphs of noise level as a function of time, whilst the tabulated data could have been made available in a separate addendum.
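
As an illustration of the kind of summary that helps a non-technical reader, here is a minimal sketch in Python that condenses tabulated logger output into a time-history graph. The file name and column names ("timestamp", "LAeq_dB") are hypothetical placeholders; adapt them to the logger's actual export format.

```python
# Minimal sketch: condense tabulated logger output into a time-history graph.
# The CSV file name and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("logger_export.csv", parse_dates=["timestamp"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(data["timestamp"], data["LAeq_dB"], linewidth=0.8)
ax.set_xlabel("Time")
ax.set_ylabel("LAeq (dB)")
ax.set_title("Measured noise level as a function of time")
fig.autofmt_xdate()
fig.savefig("noise_time_history.png", dpi=150)
```

The full table can then sit in an addendum, with the graph carrying the interpretation in the main report.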

On occasion, there is no qualitative assessment of the background noise sources that may affect a community. Specifically, it is good practice to listen to, and make audio recordings of, the background noise at a location, to determine which noise sources dominate, before deploying remote noise monitors. In one case two nominally similar locations produced contradictory diurnal recordings, but they were not revisited to examine why.

The reason for measuring the meteorological conditions at the same location as the acoustic recordings, as required in BS 7445, seems to be misunderstood. In many cases the meteorological data are not analysed to establish how they might affect the recordings, e.g. wind noise or rain impact on the microphone, or how distant sources, such as a motorway, may contribute to the background noise through atmospheric refraction in particular meteorological conditions. In some cases, the meteorology from a met station several kilometres from the site is tabulated instead of the local meteorology, but the latter, which will be affected by surface topography, can be significantly different.
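
A minimal sketch of the sort of cross-check that is often missing: pair each noise interval with the nearest on-site met reading and flag intervals at risk of weather contamination. The file and column names are hypothetical, and the wind and rain thresholds are illustrative rather than values drawn from BS 7445.

```python
# Minimal sketch: flag measurement intervals likely contaminated by weather.
# File names, column names and thresholds are illustrative assumptions.
import pandas as pd

noise = pd.read_csv("noise_log.csv", parse_dates=["timestamp"])
met = pd.read_csv("onsite_met_log.csv", parse_dates=["timestamp"])

# Match each noise interval to the nearest met reading within 5 minutes.
merged = pd.merge_asof(noise.sort_values("timestamp"),
                       met.sort_values("timestamp"),
                       on="timestamp",
                       tolerance=pd.Timedelta("5min"))

# Example screening rules: wind above 5 m/s risks microphone wind noise,
# and any rainfall risks impact noise on the windshield.
merged["suspect"] = (merged["wind_speed_ms"] > 5.0) | (merged["rain_mm"] > 0.0)
print(merged.loc[merged["suspect"],
                 ["timestamp", "LAeq_dB", "wind_speed_ms", "rain_mm"]])
```

Flagged intervals can then be excluded, or at least discussed, rather than silently averaged into the reported levels.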

The effect of local meteorological conditions on the refraction of sound, and hence on recorded data, is not always understood or examined. In one case a hot cloudless day with little wind was assumed to provide ideal conditions in which to measure the characteristics of noise decay between a source and receiver 1 km apart. In fact these are strong lapse conditions, in which an acoustic shadow is experienced around a source near the ground; consequently the measured attenuation would not be representative of the average experienced by the community which, in this case, would normally be in the prevailing downwind direction.
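
A back-of-envelope check shows why. Under a temperature lapse the sound speed falls with height, so rays curve upward with radius R and a shadow zone forms beyond a distance set by the source and receiver heights. The near-ground lapse rate of -0.1 K/m and the heights h_s = h_r = 1.5 m below are illustrative assumptions, not figures from the case described:

```latex
% Illustrative only: the lapse rate and heights are assumed values.
\[
  \frac{dc}{dz} = \frac{dc}{dT}\,\frac{dT}{dz}
    \approx 0.6 \times (-0.1) = -0.06~\mathrm{s^{-1}},
  \qquad
  R \approx \frac{c_0}{\lvert dc/dz \rvert}
    \approx \frac{340}{0.06} \approx 5.7~\mathrm{km}
\]
\[
  x_{\mathrm{shadow}} \approx \sqrt{2Rh_s} + \sqrt{2Rh_r}
    \approx 2\sqrt{2 \times 5700 \times 1.5} \approx 260~\mathrm{m}
\]
```

On these assumptions a receiver 1 km away sits well inside the shadow zone, so the measured attenuation would be far greater than the community would typically experience downwind.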

One study lacked an appreciation of the highly directional nature of a source, assuming that a single sound level value for the source would suffice, without explaining how it was measured or what it represented. The same study did not explain how calculations were performed to account for a configuration change and for propagation over complex terrain, nor did it examine the accuracy of the sound levels predicted at residential properties.
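
By way of contrast, here is a minimal sketch of the transparent kind of calculation one would expect, using a simple free-field point-source model with an explicit directivity correction (in the spirit of ISO 9613-2). The directivity table and the excess-attenuation figure are hypothetical placeholders, not measured data.

```python
# Minimal sketch: received level from a directional point source.
# The directivity table and excess attenuation are illustrative assumptions.
import math

def received_level(lw_db, directivity_db, distance_m, excess_attenuation_db=0.0):
    """Lp = Lw + DC - 20*log10(d) - 11 - A_excess (free-field point source)."""
    geometric = 20 * math.log10(distance_m) + 11
    return lw_db + directivity_db - geometric - excess_attenuation_db

# Hypothetical directivity corrections (dB) at 45-degree intervals.
directivity = {0: 5.0, 45: 3.0, 90: 0.0, 135: -4.0, 180: -8.0}

for angle, dc in directivity.items():
    lp = received_level(lw_db=110.0, directivity_db=dc,
                        distance_m=800.0, excess_attenuation_db=5.0)
    print(f"{angle:3d} deg: predicted Lp = {lp:.1f} dB")
```

Tabulating the prediction angle by angle in this way exposes both the directivity assumption and its effect on receiver levels to independent appraisal.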

This section of the two-part blog has highlighted some pitfalls that should be avoided. If you missed it, part 1, on what makes a good report, can be found at www.ioa.org.uk/draft-excluders-part-1-what-makes-good-report. IOA members are required to maintain and upgrade their professional knowledge and to encourage others to do so. These blogs are just one of many resources available to keep members up to date with technological developments, guidance, standards and regulations. It is particularly important for our younger members (and those not so young but in the early stages of their career in acoustics) to benefit from these resources, which will help to improve their career development and prospects, as well as the quality of their reports.