2015 Veterinary Economics Career and Family Survey

Welcome to the first of a series of posts exploring the data from the 2015 Veterinary Economics Career and Family Survey.

Data source and research plan
Ryan and I wrote the questions for this survey, and I’m going to spend most of this initial piece talking about the quality of the responses, rather than the data in the responses.  We need to get in the habit of approaching economic data as rigorously as we approach medical data.  Consistent with the AVMA Principles of Veterinary Medical Ethics, we should not use products supported solely by proprietary research unless that research is made available for scrutiny, nor should we accept claims about those products that are not supported by the research made available.

This was not a commissioned survey.  Rather, Ryan and I sat down and asked ourselves, “What do we want to know?”  And the answer was obvious: how does debt affect our choices to buy a house or a practice, to get married and have kids?

So we set ourselves to the task of identifying what data we’d need, and writing survey questions that would get it for us.

 

When we wrote the questions we paid a lot of attention to best practices, like these.  We were asking for data from our friends and coworkers, our bosses and employees, our spouses and parents. Talk about eating your own cooking... you bet we were careful about what we asked and how we asked it. With cautionary examples like the surveys coming through our inboxes from Big Veterinary, we think we were able to avoid leading questions and false choices. But see what you think: here is the final product as dvm360 distributed it.

 

Survey instruments should be made available to anyone consuming research built on survey data.  This serves at least two critical functions: first, if the questions and data are good, other researchers can use and build upon them rather than everyone wasting resources reinventing the wheel.  Second, and more importantly, if the data are influenced by the questions in ways you don’t see or that aren’t clear in the data themselves, making the survey instrument public gives other people a chance to catch what you missed and, for the profession's sake, improve upon what's been done.  We need to develop and share validated instruments for research done on veterinary medical professionals just as we do for research done by veterinary medical professionals.

 

Deploying the survey

Veterinary Economics distributed the survey to its readership; Ryan and I distributed links via social media (follow us on Twitter! @JustavetFromSDN & @RGGates); individuals picked it up from there and passed it along to various online discussion groups and professional listservs; and several allied organizations and state VMAs distributed it to their memberships as well.

 

The survey was open for three months; in that time a total of 808 responses were received.  51 were excluded from all analysis: most because they were not from veterinarians, the remainder because they provided no answers beyond the basic demographic questions of gender and location.  Then there was the respondent who said he was Octomom… may he be damned to a life consuming products developed and marketed on the basis of his own answers to surveys!

 

We ended up with 741 remarkably complete, usable responses.  This could be the hallmark of a well-written survey delivered to an appropriate audience, or it could signify a high degree of self-selection.  All 741 gave a gender and a country; 735 gave a school; 724 gave details on their debt both at graduation and now; 718 answered questions about practice and home ownership; 718 also gave their marital status; and 706 answered detailed questions about having kids.  Of the 700 respondents currently within the US, 679 gave localizable zip codes.

 

Ryan and I will focus largely on the 700 US respondents for most of our posts.  But… we’ve only got 700 respondents, and there are roughly 120,000 vets in the US.  How do we know whether these 700 respondents are an accurate sample of all the vets in the US?  We didn’t stratify our sampling! We didn’t weight any responses! Well… we don’t think we need to, and here’s why.

Assessing data quality

Rather than attempt to make sure we *sent* the survey to a randomized sample, we’re going to check how the responses we *got* are actually distributed.

 

We can do that by comparing the age, school and gender of our respondents to those of all US vet school graduates as reported by IPEDS.  IPEDS is an objective third party that maintains a public database of standardized information from all US higher education programs, including veterinary school graduates. The comparisons can’t be exact because of non-US graduates (see the following paragraph), but if the distributions across year, gender and school are similar for our respondents and for the entire population of US veterinary graduates documented in IPEDS, then our respondents likely represent that population accurately for those characteristics.

 

If they don’t match, we will know right away how our sample population differs, which tells us which veterinarians the conclusions do, or don’t, apply to.
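For readers who want to see what that check looks like in practice, here is a minimal sketch in Python, using graduation year as the example; gender and school would be handled the same way. The file and column names (respondents.csv, grad_year, ipeds_grads_by_year.csv, n_grads) are illustrative assumptions, not our actual files.

```python
# Sketch: goodness-of-fit check of respondent graduation years against the
# population of US graduates reported by IPEDS. File and column names are
# illustrative assumptions, not our actual files.
import pandas as pd
from scipy.stats import chisquare

respondents = pd.read_csv("respondents.csv")       # one row per respondent, with grad_year
ipeds = pd.read_csv("ipeds_grads_by_year.csv")     # assumed columns: grad_year, n_grads

# Observed: respondents per graduation year
observed = respondents["grad_year"].value_counts().sort_index()

# Expected: IPEDS graduates per year, scaled to the size of our sample
ipeds = ipeds.set_index("grad_year").loc[observed.index]
expected = ipeds["n_grads"] / ipeds["n_grads"].sum() * observed.sum()

# A small statistic / large p-value means the respondents' year distribution
# looks like that of the full population of US graduates.
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3f}")
```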

 

Another check on how closely our respondent population represents the total population is to compare the number of respondents by graduation year to the number passing the NAVLE each year.  While IPEDS tracks gender and school in addition to graduate counts, it does so only for US programs, yet we know there are graduates of non-US schools in the US veterinary workforce, and among our respondents.  Those non-US grads are, however, included in the number that passed the NAVLE, or they wouldn’t be eligible to practice in the US.  So by comparing the IPEDS and respondent counts for each graduation year to the number passing the NAVLE that year, we can see whether the number of non-US graduates in our sample is proportional to that in the total population.
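Here is one way that proportionality check might be set up, again as a hedged sketch with illustrative file and column names (navle_passes_by_year.csv, ipeds_grads_by_year.csv, a us_school flag per respondent) rather than our actual data files.

```python
# Sketch: for each graduation year, compare the share of non-US graduates
# implied by NAVLE passes vs. IPEDS with the share of non-US graduates among
# respondents. File and column names are illustrative assumptions.
import pandas as pd

navle = pd.read_csv("navle_passes_by_year.csv")   # assumed columns: year, n_passed
ipeds = pd.read_csv("ipeds_grads_by_year.csv")    # assumed columns: year, n_grads
resp = pd.read_csv("respondents.csv")             # assumed columns: grad_year, us_school (0/1)

pop = navle.merge(ipeds, on="year")
pop["non_us_share_population"] = 1 - pop["n_grads"] / pop["n_passed"]

sample = (
    resp.groupby("grad_year")["us_school"]
        .apply(lambda s: 1 - s.mean())            # fraction of non-US grads per year
        .rename("non_us_share_sample")
        .reset_index()
        .rename(columns={"grad_year": "year"})
)

# Similar shares year over year suggest non-US graduates are represented in
# our sample roughly in proportion to the practicing population.
print(pop.merge(sample, on="year")[["year", "non_us_share_population", "non_us_share_sample"]])
```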

 

Likewise, we can compare the geographic distribution of respondents to that of the US population.  It is intuitive that service providers like veterinarians will locate where there are clients. (Note to schools and state legislators: this is why more schools and bigger classes won’t fill gaps in access to veterinary care!)  If survey respondents are distributed widely and in proportion to the general population, then we very likely have a sample that accurately reflects the entire veterinary population.

 
[Map: geographic distribution of survey respondents across the US]

The map above shows that respondents are widely geographically distributed, in all types of areas- agricultural, rural, suburban, urban and metro- and that respondents are distributed in proportion to population. In other words, the vets are where the clients are.
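To put a number on “distributed in proportion to population,” one could aggregate respondents by state and compare each state’s share of respondents to its share of the US population. The sketch below assumes illustrative file and column names (respondents_geocoded.csv with a state column, state_population.csv); it is not our actual pipeline.

```python
# Sketch: check whether respondents are distributed in proportion to the
# general population, aggregated here by state. File/column names are assumed.
import pandas as pd

resp = pd.read_csv("respondents_geocoded.csv")   # assumed column: state
census = pd.read_csv("state_population.csv")     # assumed columns: state, population

by_state = resp["state"].value_counts().reset_index()
by_state.columns = ["state", "n_respondents"]

check = by_state.merge(census, on="state")
check["respondent_share"] = check["n_respondents"] / check["n_respondents"].sum()
check["population_share"] = check["population"] / check["population"].sum()
check["ratio"] = check["respondent_share"] / check["population_share"]

# A ratio near 1.0 means a state is represented roughly in proportion to its
# population; large deviations flag over- or under-represented regions.
print(check.sort_values("ratio"))
```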

 

As much as we are located where our clients are, we are located where our clients have money to pay us.  The map below shows the respondent pool mapped against median household income.

[Map: survey respondents overlaid on median household income]

 

 

Veterinary medicine is a first-party payer healthcare system, folks. Most veterinarians are in private practice, which means we charge for the care we provide; if we didn’t, we wouldn’t be able to continue providing it. So, predictably, we are more likely to be located where median income is higher rather than lower. How low is too low for a median income to support a private practice?  How many practices per thousand households does each median income level support?

 

Debt

So, what did we learn about the effect of debt on our choices to buy a house or a practice, to get married and have kids?

[Graph: likelihood of life events by debt level at graduation]

THIS IS A TERRIBLE GRAPH!  The debt levels at graduation in this graph are NOT adjusted for inflation; in other words, someone who graduated with $50,000 in debt in 1999 looks the same as someone graduating with $50,000 in debt in 2014. PEOPLE: IF THE FIGURES AREN’T ADJUSTED FOR INFLATION, IGNORE THEM.  That ain’t PhD-level economics; that’s common sense.  I’ve also made ‘yes’ a lower numerical value than ‘no’, which is counter-intuitive. This illustrates that you’ve got to pay attention to how graphs are actually constructed. If you don’t understand what a graph is telling you, it isn’t good research, regardless of how much money somebody spent on it.

 

 

While we know this representation isn’t accurate, correcting for it is difficult, because we collected the data in brackets instead of as exact figures.  That was a compromise between making the survey more tedious for respondents and making the data more useful. Similarly, showing the corrected data will be a compromise between accuracy and understandability. As you’ll see, we ended up correcting the upper and lower limits of each bracket for inflation for each respondent and then plotting those two sets of data points as separate items.
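As a rough illustration of that correction step, the sketch below inflates the lower and upper bound of each respondent’s debt bracket into 2015 dollars using annual CPI-U values. The CPI figures shown are approximate and cover only a few years, and the column names (grad_year, debt_lo, debt_hi) are assumptions for illustration, not our actual data layout.

```python
# Sketch of the bracket-correction step: inflate the lower and upper bound of
# each respondent's debt bracket into 2015 dollars using annual CPI-U values.
# Only a few approximate CPI values are shown; a real run needs the full series.
import pandas as pd

cpi = {1999: 166.6, 2005: 195.3, 2010: 218.1, 2014: 236.7, 2015: 237.0}

def to_2015_dollars(amount, grad_year, base_year=2015):
    """Scale a nominal amount from grad_year into base_year dollars."""
    return amount * cpi[base_year] / cpi[grad_year]

resp = pd.read_csv("respondents.csv")  # assumed columns: grad_year, debt_lo, debt_hi

resp["debt_lo_2015"] = [to_2015_dollars(lo, yr) for lo, yr in zip(resp["debt_lo"], resp["grad_year"])]
resp["debt_hi_2015"] = [to_2015_dollars(hi, yr) for hi, yr in zip(resp["debt_hi"], resp["grad_year"])]

# The corrected lower and upper bounds can then be plotted as two separate
# series, as described above.
```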

 

Income.  Sort of.

We’re also going to use the geographic location we collected, which included both childhood and current zip codes, as a proxy for income data.  

 

In most surveys that attempt to get at how much we make, there are questions like… “How much do you make?”  

 

Well, that methodology has inherent flaws. It depends on respondents answering honestly and accurately, but we know they do neither; answers are subject to significant response and reporting bias.  It also depends on respondents all interpreting the question the same way.  Is the question asking for gross income? AGI?  Household or individual income? What about benefits and non-salary compensation? Production bonuses?

 

And in the end, even if everyone interpreted it the same way, remembered exactly how much they made AND were perfectly honest about it… you’d still only know income, and income does not correlate with purchasing power or standard of living.  A six-figure income for an associate working two jobs as a recent grad with six-figure educational debt and three kids under seven is very different from the six-figure income of a sixty-year-old with a paid-off practice who wrote the last college tuition check last spring. Yet their incomes look the same on paper.

 

We asked no direct income questions, relying instead on an indirect, objective and relevant metric: zip code. As the field of segmentation analysis has shown over the past thirty years, few people live in zip codes where the average household is dramatically different from their own. By looking up the median household income for a respondent’s zip code, we get a more accurate idea of how the respondents are doing, as opposed to just knowing how much they are making.
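In practice, attaching that proxy is a simple join between respondent zip codes and a Census/ACS table of median household income by zip. The sketch below assumes illustrative file and column names (current_zip, acs_median_income_by_zip.csv); it is not our actual pipeline.

```python
# Sketch: attach a median household income to each respondent by joining zip
# codes against a Census/ACS extract. File and column names are assumptions.
import pandas as pd

resp = pd.read_csv("respondents.csv", dtype={"current_zip": str})
acs = pd.read_csv("acs_median_income_by_zip.csv", dtype={"zip": str})
# assumed acs columns: zip, median_household_income

resp = resp.merge(acs, left_on="current_zip", right_on="zip", how="left")

# Each respondent now carries the median household income of their zip code,
# our proxy for how they are doing rather than what they earn.
print(resp["median_household_income"].describe())
```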

 

We obtained median household incomes for respondents’ zip codes.  For gender at least, this turns out to give a very different picture than we have been led to expect.

[Chart: distribution of zip-code median household income by gender]

This method shows that female veterinarians actually tend to live in areas with slightly higher mean household incomes than male veterinarians do; while both genders show outliers in higher-income areas, males show significantly more such outliers.

 

 

            female      male
mean        $79,403     $76,845
std dev     $30,275     $35,863
median      $72,496     $66,486
skew        2.0         2.9
kurtosis    6.5         11.3
# resps     538         203

 

Looking at this table: yes, the female distribution is roughly normal, with a kurtosis under 8 and a skew under 3, but the male distribution isn’t. A kurtosis of 11.3 together with a skew of 2.9 means a heavy right tail, i.e. there are more outliers pulling up the average and the standard deviation. In other words, the shapes of our curves are significantly different. BUT THE BIG THING HERE is that both genders enjoy the same standard of living (and probably household purchasing power)... despite women being paid less than men in every survey we’ve ever seen.  Simply asking people how much they make doesn’t tell the whole story.
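For anyone who wants to build a table like the one above from their own data, here is a minimal sketch. The merged file name and column names are assumptions, and note that pandas reports excess kurtosis, which may differ from the convention used in the table above.

```python
# Sketch: reproduce a summary table like the one above, assuming a merged file
# with a gender column and the zip-level median_household_income proxy.
import pandas as pd

merged = pd.read_csv("respondents_with_income.csv")   # illustrative file name

summary = merged.groupby("gender")["median_household_income"].agg(
    mean="mean",
    std="std",
    median="median",
    skew="skew",
    kurtosis=lambda s: s.kurt(),   # note: pandas reports excess kurtosis (normal = 0)
    n="count",
)
print(summary.round(1))
```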

Comparing our childhood geographic distribution over time may hold additional insight into the profession’s current lack of cohesion.  We used to be a much more homogeneous group. Did those who entered the profession forty, thirty, twenty years ago grow up in places that were more alike than the range of places we come from these days?  Perhaps some of our discontent arises because a far smaller proportion of us can achieve the same rise in economic class over a career as before? To answer that last question we can compare relative childhood economic status to relative adult economic status; however, we’ll probably have to use median home price rather than median household income, as the latter isn’t readily available very far back.
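One way that childhood-versus-current comparison might be set up: merge each respondent’s childhood and current zip codes against median home price tables for the corresponding eras, then convert prices to percentile ranks within the respondent pool so the two eras are comparable. Everything below (file names, columns, a single “childhood era” price table) is an illustrative simplification, not our actual method.

```python
# Sketch: compare relative childhood economic status with relative current
# status using zip-level median home prices converted to percentile ranks
# within the respondent pool. All file and column names are illustrative, and
# a single "childhood era" price table is a simplification.
import pandas as pd

resp = pd.read_csv("respondents.csv", dtype={"childhood_zip": str, "current_zip": str})
hist = pd.read_csv("home_prices_childhood_era.csv", dtype={"zip": str})  # zip, price
now = pd.read_csv("home_prices_current.csv", dtype={"zip": str})         # zip, price

resp = resp.merge(hist.rename(columns={"zip": "childhood_zip", "price": "childhood_price"}),
                  on="childhood_zip", how="left")
resp = resp.merge(now.rename(columns={"zip": "current_zip", "price": "current_price"}),
                  on="current_zip", how="left")

# Percentile ranks within the respondent pool make the two eras comparable
# without having to adjust home prices for inflation.
resp["childhood_pct"] = resp["childhood_price"].rank(pct=True)
resp["current_pct"] = resp["current_price"].rank(pct=True)

# Positive values suggest a respondent now lives in a relatively better-off
# area than the one they grew up in.
resp["class_change"] = resp["current_pct"] - resp["childhood_pct"]
print(resp["class_change"].describe())
```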

 


 


That ought to wet your whistle.  Stay tuned for more as we elaborate on these data points and bring new findings to light.  Veterinarians wishing to join the conversation can message us to join Vets4Change on Facebook.
