Hold Yourself Accountable

Chart Your Path

You need this if you want to: Identify metrics, track progress, and use data to inform action and ensure accountability.

Key activities conducted by the DEI specialist may include:

  • Identify metrics (e.g., race, ethnicity, socio-economic data for applicants, staff, leadership, board, etc.; promotions and compensation; staff experience of inclusiveness of culture) to track progress and develop time-bound goals.
  • Design and implement systems and processes to collect data and analyze it in a disaggregated way.
  • Determine how the data and analysis will be shared (e.g., format, frequency, audience) and used internally and externally to inform action and create accountability.

Does this look similar to what you’re looking for? If so, fill out the form below and we will be in touch with you.


Selecting the right consultant: Evaluating proposals

Occasionally, clients will reach out to us with an ask. Through our platform, they have sourced proposals from several similarly well-qualified consultants, and they are having a hard time deciding between the options. Do we have any guidance for them on selecting the right consultant?

Some quick context on Catalyst:Ed for those of you who are unfamiliar with our work: We are a technology-enabled service that matches schools, school systems and education nonprofits with vetted consultants for short-term, mission-critical needs. This blog post is an effort to crystallize some of the lessons we’ve learned so far into two key strategies for education leaders looking to select consultants. Each of these strategies is based on not just our experience helping almost a hundred education organizations match up with expert consultants, but also our individual experiences as education leaders and consultants.

What’s an evaluation scorecard?

The evaluation scorecard is a simple matrix that helps you keep track of how each consultant lines up on the selection criteria that matter most to you. It shares some similarities with the evaluation scorecards that many education leaders use when hiring full-time staff. For instance, you will want to evaluate consultants’ qualifications as demonstrated by previous accomplishments. However, there are some differences as well. In particular, you will want to check for criteria – such as the proposed approach and budget – that are especially relevant to project-based work.

Our evaluation scorecard had humble beginnings. When we first launched our services, we relied on Google Drive to share consultant profiles and project proposals with clients. The earliest version of the evaluation scorecard was just a simple summary table with the names of interested consultants along with their locations, budgets and a checklist with a yes/no response to whether they met key selection criteria. Once we had a technology platform, we eliminated the summary table, since the information was relatively easy to access. Much to our surprise, our clients missed it. Some of them even re-purposed the summary table into an evaluation scorecard to help them score consultants and their proposals. We knew it was time to bring it back – but this time, we would expressly structure it as an evaluation scorecard.

Determining the evaluation criteria in your scorecard

We’ve put together a simple evaluation scorecard template that you can use and tailor to meet the needs of your project. To download our template, click the link above.

Our evaluation scorecard includes some specific criteria that we’ve found to be especially relevant to project-based work. Consider it a starter template – you’ll want to make it your own by determining which of the criteria to include in your scorecard. How do you decide which criteria to include? Ideally, your selection criteria should flow directly from your project scope. This is one of the reasons why at Catalyst:Ed, we invest time upfront developing clear and detailed project scopes for our clients that outline the why (the problem and definition of success), what (key activities and deliverables) and who (must-have and nice-to-have consultant qualifications) of the project.

Below, we provide additional detail on our included criteria and some of the factors that we encourage our clients to consider as they score consultants. Note that we do not recommend that you include all of these in the scorecard. Instead, pick and choose the ones that are most relevant to your project.

Consultant qualifications:

  • Domain area expertise: Has the consultant worked on similar projects before? Is the consultant’s work on these projects directly relevant to the work required on your project? If work samples were provided, do they match with your expectations?
  • Consulting experience: Has the consultant advised other similar organizations before? If not, is there evidence suggesting that the consultant can be successful in advising your organization? For newly minted consultants who may bring strong domain expertise, is there evidence they can transfer that expertise to new, unfamiliar contexts?
  • Other project-relevant competencies: Does the consultant demonstrate other competencies critical to the success of the project (e.g., strong project management skills, written communication skills, existing networks/relationships, etc.)?
  • Bandwidth: Does the consultant have the capacity to engage in this work in the manner that it requires? How much time are they willing to dedicate to your project? What else do they have going on right now? Can they work within your deadlines and time constraints?

Proposed Approach:

  • Structure: Does the proposal include all the information you requested?
  • Content: Does the proposal present the right combination of vision and detail? Are the specific activities, deliverables, and timeline aligned with your expectations and do they seem feasible? Does the proposal include any new ideas that you hadn’t considered before?
  • Presentation: Has the consultant clearly and compellingly described how they will accomplish the work set out in the project? Do the writing and the overall presentation reflect the consultant’s comfort and expertise in the project area?

Budget:

  • Price: Is the budget within your desired range? How does it compare to budgets proposed by other consultants? If it’s more expensive, what does the additional money get you? Has the consultant provided adequate detail on how they arrived at the budget? Are all significant incidental expenses (e.g., travel) accounted for?
  • Structure: Is the proposed budget structure (fixed price or hourly) aligned to your own requirements?

Other criteria: This category of odds-and-ends captures other criteria that, depending on the specific project, might be helpful for you to consider:

  • Location: More relevant for projects where significant in-person work is required.
  • Prior experience with your organization: Relevant in cases where having organization context and prior relationships within the organization is important to the success of the project.
  • Brand: Relevant in certain kinds of high-profile projects where the involvement of a high-profile consultant or consulting firm can add to the credibility of the project.
  • Engagement Style: Relevant in projects where the organization requires a certain level of engagement from the consultant.
  • Alignment with other organizational goals and values: Examples might include mission-alignment, increasing diversity or promoting local/small businesses.

[Image: a spreadsheet with the scorecard template]

Using the evaluation scorecard

Tab 2 of our downloadable worksheet (accessible at the bottom of this post) is a sample annotated and filled-in scorecard that illustrates how it works in practice. You will want to fill out the matrix for each consultant against each criterion in your scorecard. The “Notes” section is a place for you to jot down any thoughts, concerns or follow-up questions.

Finally, we leave you with some FAQs:

Can we use this scorecard to evaluate consultant teams? Absolutely. If there are multiple people involved, we recommend that you score them on the experience and expertise they bring collectively.

Do we need to have a detailed scoring rubric as well? Only if you want to. While a detailed scoring rubric might be helpful, developing one takes time. For the purposes of selecting consultants, we’ve found that even a basic scorecard with a “yes/no” scale helps clients narrow down the consultant pool in a systematic way.
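To make the tallying concrete, here is a minimal sketch in Python of how a basic yes/no scorecard might be scored. The criteria and consultant names below are illustrative placeholders, not taken from our template:

```python
# Minimal sketch of a yes/no evaluation scorecard tally.
# Criteria and consultant names are illustrative examples only.
criteria = ["Domain expertise", "Consulting experience",
            "Bandwidth", "Budget in range"]

scores = {
    "Consultant A": {"Domain expertise": True, "Consulting experience": True,
                     "Bandwidth": False, "Budget in range": True},
    "Consultant B": {"Domain expertise": True, "Consulting experience": False,
                     "Bandwidth": True, "Budget in range": True},
}

def tally(scorecard):
    """Count the number of 'yes' marks per consultant."""
    return {name: sum(marks.values()) for name, marks in scorecard.items()}

# Rank consultants by the number of criteria met.
for name, total in sorted(tally(scores).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}/{len(criteria)} criteria met")
```

Even this coarse yes/no tally surfaces trade-offs quickly; a weighted version (multiplying each mark by a criterion weight) is a natural next step if some criteria matter more than others.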

Do we have to use the evaluation scorecard to evaluate consultants? It’s entirely up to you. The scorecard is fundamentally just a tool to help you through selection. But here’s what we’ve found: when clients go about the selection process in a systematic way, they are more likely to critically review proposals, ask the right questions during the interview process and identify relevant trade-offs. They are also more likely to be comfortable with their eventual selection and – research shows – less likely to yield to their unconscious biases.

What happens next? Once you’ve reviewed each candidate’s resume, proposal, and relevant work samples and evaluated them against your decision matrix, you should be able to identify your top three candidates to interview.

How not to work with a “lemon”

A few months ago, we were in the market to buy a house. A newly constructed home in our preferred neighborhood was beyond our budget, so we focused on the older houses. Built about a century ago, these came loaded with charm. Unfortunately, as our realtor often reminded us, they could also come loaded with issues. More than outdated layouts and rusty fixtures, we needed to be concerned about potential disasters hiding behind walls, under floors and above ceilings – think flood damage, old plumbing and wiring, termites, mold. As potential buyers, our real concern was not what we knew, but what we didn’t know.

The “lemon” problem

Economist George Akerlof called this “the lemon problem” in his seminal 1970 paper (he went on to win a Nobel Prize for his work in this area), illustrating it with the example of the used car market. In his telling, there exists a fundamental information asymmetry between sellers and buyers of used cars: the seller knows more about the real value of the car than the buyer does. Mixed in with the high-quality used cars on the market are duds, or “lemons”. Buyers are aware of this but are unable to tell the two apart. As a result, they end up paying the same price for all used cars – in effect, overpaying if they end up with a dud but, by the same token, getting a high-quality used car at a steal if they are lucky.

The result of this information gap can be costly for the market as a whole: sellers of high-quality used cars lack the incentive to participate in the market, and buyers, faced with uncertainty, are similarly reluctant to join in.
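Akerlof’s unraveling dynamic can be illustrated with a stylized simulation (the specific numbers here are illustrative assumptions, not drawn from the paper): car quality is uniform, each seller values a car at its quality, and buyers value it at 1.5× quality, so every trade would be mutually beneficial under full information. But because buyers can only pay for the average quality on offer, the best cars keep exiting and the price ratchets downward:

```python
import random

random.seed(0)

# Each car's quality q is uniform on [0, 1]. The seller values it at q,
# the buyer at 1.5 * q -- trade is beneficial at full information.
qualities = [random.random() for _ in range(10_000)]

def market_round(price, qualities):
    """Sellers offer only cars worth no more to them than the price;
    buyers' willingness to pay is 1.5x the AVERAGE quality offered."""
    offered = [q for q in qualities if q <= price]
    if not offered:
        return 0.0
    return 1.5 * sum(offered) / len(offered)

# Iterate: buyers lower their price to their willingness to pay,
# the best cars exit, and the market unravels toward zero.
price = 1.0
for _ in range(20):
    price = market_round(price, qualities)
print(f"Price after adjustment: {price:.3f}")
```

Each round, the average quality on offer is about half the price, so the buyers’ willingness to pay shrinks to roughly three-quarters of the previous price, and after enough rounds almost no trade occurs despite every trade being mutually beneficial.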

We see this dynamic play out to some extent in the market for short-term talent [1]. While there are many consultants and self-anointed experts in education, there is wide variation in the quality of their work product and practically no reliable information to help organizations evaluate expertise before hiring someone for a project. Moreover, in the absence of a systematic and unbiased mechanism to validate and showcase expertise, individuals who bring specialist skills in an area often find it challenging to signal their superior grasp of the subject.

Why it matters

For an organization, working with a “lemon” can result in time and money being invested in an initiative without commensurate results. At a systemic level, one implication is that, over time, market participation by experts as well as by organizations in need of expertise is lower than it could be. Another implication is that, in the absence of directly relevant information about expertise, buyers and sellers end up relying on proxies to help them separate the wheat from the chaff. References from people we know, the pedigree of the institutions the expert attended, the salience of the organizations she has worked with, the number of Twitter followers she has – all these and more become rules of thumb that we use to gauge whether someone has the chops to deliver on a project requirement [2].

These proxies may be useful, but they are incomplete. As Rick Hess points out in his thought-provoking essay in EdWeek, they can “lead to our investing great authority in this or that expert for a season” or, extrapolating from expertise in one area, investing an individual with “presumed expertise across a broad range of issues”. The result is often an underwhelming work product, followed closely by skepticism about the benefit of expertise in general.

Another challenge presented by the proxies is that they often play into and reinforce biases and create barriers for those who through quirks of fate or their own idiosyncratic decisions are not “in the network”. A preference for working with someone who went to a certain college or has TFA or KIPP on her resume isn’t wrong per se. However, it can prevent an organization from working with an expert who might be a better fit for the task at hand, but whose resume may lack the words it’s scanning for. It can also – unfairly – require her to vault over a higher bar to get access to the same professional opportunities.

The value of unbiased information

So what’s the solution? An organization seeking expert talent should ideally put in the legwork to vet a consultant’s expertise before signing them on. It should ideally look beyond the “old boys’ network” when sourcing talent. And, equally importantly, the hiring manager should ideally check her own biases, since the very act of being aware of our subconscious preferences makes it more likely that we will fairly evaluate the options in front of us.

I say “ideally” because doing some of this requires time and effort, which is always a constraint in the sector. We therefore also need to look at systemic solutions. Regulation helps, but it’s a clunky and heavy-handed answer. Better information gathering and sharing, facilitated by technology, is a far more elegant option, since it balances out the information asymmetry between buyers and sellers. At Catalyst:Ed, we are excited about the power of the data we are collecting on individuals’ expertise through our upfront vetting process as well as mid-project feedback and post-project evaluations. These “reputational tools” can help create more transparent and effective markets by giving talent a credible way to communicate their expertise and giving buyers more information to find someone who is the right fit for their needs [3].

The information we gather doesn’t just allow us to differentiate between levels of skill, but also between types of skill. Our reference check process, for instance, gives us great insight into the skill-sets of experts, often adding nuances that the experts themselves may not be aware of. A couple of months ago, for example, we had two expert development professionals apply to our network on the same day. Both had been recommended to our network by people whose judgment we trusted. Both brought solid and directly relevant experience and spoke knowledgeably and passionately about their expertise during the interviews. The reference check process revealed interesting differences, though: one’s references extolled his ability to work independently and turn out very high-quality grant proposals almost single-handedly on superhuman timelines, while the other’s references spoke glowingly about her ability to orchestrate a team effort to produce outstanding work products. Two different skill-sets, each best set up for success in a completely different situation.

How do we see this panning out? Here’s what I believe will happen if we do this well: More organizations will look for experts who are vetted and who bring a specific skill-set and mind-set as opposed to a “general purpose” expert. More projects in the sector will go off well, thanks to more informed and better matches. Pricing will show greater dispersion and will be a better reflection of the level of expertise that someone brings. And more organizations and talent will participate in the market for expertise.

———————————— 

Notes:

[1] While this may also be a problem for the talent market as a whole, it is especially so for short-term talent, since the consultant doesn’t have a lot of time before he or she has to start delivering results and the option to “develop and train” the person doesn’t usually exist. 

[2] Not surprisingly, we find through our data on consultant pricing that hourly rates tend to cluster based on seniority rather than expertise.

[3] A working paper by the Mercatus Center makes this point a lot more eruditely than I do.