Comments on the recent NHS England GP Survey 2016-2017


“General Practice is currently struggling. You don’t have to look very hard or very far to realise this. We are struggling to recruit doctors, struggling with increasing patient demand, struggling with paperwork and struggling to find more hours in the day to manage all of this. What we need is support and understanding, which is why I am struggling a little with some recent local newspaper reports following publication of the annual General Practice Patient Survey. This survey asks a small sample of patients to rate their experience of their practice. Questions range from ease of access to confidence and trust in the GP they saw. The survey results are great for headlines, but in reality things are somewhat more complicated, and there is a danger that people jump to the wrong conclusions about practices and individual GPs without knowing the full story. 


One of the first things the results encourage you to do is to compare your practice with others. I’m not sure how helpful this is. While I understand that people want information about the kind of service they can expect to receive in any one particular surgery compared to others in the town, you need to remember that it can be like comparing apples with oranges. The geographical location of each practice, along with quirky historical preferences by patients, means that every practice has a unique demographic make-up. The ability of the practice to meet the needs and expectations of the population it serves therefore varies considerably. This means, in simple terms, that Practice A might be able to provide two 2-hour surgeries per day and still find not all slots filled (although as I write this I can’t think of (m)any that manage this!), while Practice B a few miles away has to put on two 3-4-hour surgeries and still has to see extras to meet the demand. This does not necessarily mean that Practice B is any worse, or providing poorer care, than Practice A. 


This brings me to my second point. The indicators looked at and the questions asked of patients do not cover all aspects of care. A large part of the survey seems to be structured around access: around the ability to get to see a GP or nurse. The survey does not look at clinical outcomes, at how well the surgery actually treated its patients. This phenomenon is not unique to General Practice, of course, with many of our NHS targets and standards relating to how long we wait for things, rather than to whether the interventions make us better or not. That’s partly because access targets are easy to measure. We need to remember, however, that we should be moving towards measuring what we want to measure rather than just what we can measure. We must start measuring the things that truly reflect the quality of care provided and its outcomes. Access is an important area of quality to measure, but it is not the only one. We must look at all things together. Care Quality Commission (CQC) reports, for example, have not flagged any concerns for the practice labelled ‘worst’ in the local article; it has in fact been rated ‘good’. Let us not forget this. 


I would also suggest that the way in which this survey is undertaken is flawed. A random sample of your practice population is invited to respond. In the case of the practice highlighted by our local newspaper, only 105 of 220 questionnaires were returned, a response rate of just 48%. This is a tiny proportion of the practice population, and the survey does not even target those who have recently had an appointment. There is a risk that you only bother to complete and return the survey if you have an issue to highlight; if you are happy with your care, you may not find the time to complete it because you do not see it as important. So we have a very small sample (some would say too small to be meaningful), drawn from a cohort with an intrinsic bias in how likely they are to respond.


The survey equally does not provide any local context or attempt to look at underlying reasons. I’m sure that those who put the survey in place would say that this is not the job of the survey, and that it is simply to highlight areas to the practice that they may wish to focus on. If that were solely the case then this might provide helpful intelligence for the practice to use to improve. The fact that the results are published on the internet, however, immediately turns these questions into targets which will be reported against by the media and interpreted as such by patients. The risk is that it fuels accusatory and blame-seeking behaviour rather than seeking to understand, support and encourage improvement. 


This latter point, however, is exactly how the results will be used in practice: highlighting areas to look at and work to improve. Practices take these results seriously because they know patients will form a view of them when reading them. Practices also recognise that the survey can highlight areas needing attention, although more often than not these areas are already known and understood, with actions already being taken. 


It would be remiss of me at this point not to thank all local practices for the hard work they have done. Many will be justifiably proud of their excellent survey results alongside the CQC findings across the patch. All are working hard, all are finding things challenging at the moment, all are continuing to provide care for the people they serve. 


My plea would be for everyone to take some time to fully understand and appreciate the pressures being experienced by General Practice at the moment. A good place to start would be to read this report in GP Online, and the King’s Fund report it refers to.


All parts of the NHS are struggling to cope. Could I suggest we stop pointing fingers and start supporting in order to improve things?”


Weaverham Surgery