We were recently contacted by a financial adviser (let’s call him Dave). He wanted our opinion on whether to accept an invitation to participate in an online adviser ratings and review service.
The service provider had urged him to ‘claim’ his profile (which had already been added to the site without his permission) in return for waiving the usual subscription fee.
The invitation claimed that market volatility had driven an upsurge in consumer traffic to the site (although no evidence was provided), generating interest in getting financial advice.
It went on to suggest the positive impact this could have on Dave’s business if he participated. The message was simple: more site visitors looking for advice = more prospects for Dave to offer his services to, plus an opportunity to demonstrate validation by his existing clients.
In short, Dave was being encouraged to ‘claim’ his profile and then get his existing clients to review him on their site in return for the possibility of new clients and no subscription required. So far, looks like a good deal for Dave.
But it raised a few questions.
• If Dave didn’t have to pay for the service, how did the site generate revenue? What was the business model, and how could it impact Dave?
• Was the basis for the client reviews and ratings credible and reflective of adviser performance across their client portfolio?
• Was there any evidence that the service was already being ‘gamed’ by advisers to achieve distorted outcomes (as had been evidenced on other types of ratings/review sites) and what reputational impact could this have on Dave should he participate?
We’re glad we took the time to investigate further. In summary, we advised Dave to think very carefully about taking up the offer for a few key reasons:
• There were indications of ‘gaming’ of the ratings due to the large proportion of 5-star ratings attributed to advisers with just a single client review. This indicated widespread adviser conduct which, today, may not align well with legal and ethical obligations imposed by FASEA. Should Dave participate, he could be tainted by future allegations of ‘gaming’ – guilt by association.
• There was evidence to suggest the service provider is encouraging its use in order to monetise the data that advisers inherently help them generate. Under this scenario, Dave and every other adviser using the service would be the ‘product’.
• The rating itself has serious shortcomings in what it measures and therefore limits consumers’ ability to make a confident choice about which adviser performs better across all relevant service variables. It also doesn’t help advisers improve their performance.
If you’re interested in the process we followed, what we discovered, and the reasons for our advice to Dave, please read on.
As a first step, we went to the site and examined the searchable adviser profiles for each of the State and Territory capital cities (i.e. excluding individual metropolitan suburbs and regional areas). The rationale was that the largest single groupings of advisers are located in the capital cities, so this gave a good representation across the country and ensured the resulting analysis was based on a suitably large sample. Each search on the site, based on the capital city’s postcode, returned the number of advisers located in that postcode and presented the profiles of 200 advisers, except for Hobart, Canberra and Darwin.
Sample size is an important consideration for this type of analysis. Determining sample size means choosing the number of observations to include in a statistical sample so that inferences can be confidently made about the wider population (in this case, all financial advisers in Australia).
Assuming a total population of 25,000 advisers in Australia (probably somewhat less now in reality), the required sample size was 379. This meant a total of 379 observations of the adviser population listed on the site were needed to make inferences about the total population with 95% confidence in the results. Our sample of 8,504 advisers, drawn from each of the state capital city postcodes, was roughly 22.4 times larger than required, ensuring a high level of confidence in the results.
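The required figure of 379 can be reproduced with Cochran’s sample size formula plus a finite population correction; the article doesn’t name its method, so treating this as the calculation used is an assumption, albeit the standard one:

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite population correction.

    z=1.96 corresponds to a 95% confidence level, margin is the
    desired margin of error, and p=0.5 is the most conservative
    assumed population proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate (~384.16)
    n = n0 / (1 + (n0 - 1) / population)        # correct for the finite population
    return math.ceil(n)

print(required_sample_size(25_000))  # → 379
```

With a very large population the correction barely matters (the answer approaches 385); for 25,000 advisers it trims the requirement to 379.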
The next step was to identify how many of the adviser profiles in that sample had at least 1 client review and what proportion had none. Of the 8,504 advisers in the sample, only 7.21% had at least 1 client review, meaning almost 93% of all advisers on the site had no reviews at all. The relative importance of client reviews appears to be accentuated in the smaller capital cities, perhaps reflecting a need to cast a wider net to attract new clients rather than rely predominantly on referrals from existing clients.
Next, we analysed those advisers with reviews (7.21% of the total sample) to identify the proportions by number of reviews. Almost 95% of advisers had 30 or fewer client reviews. More than 62% had 5 or fewer and, worryingly, almost 40% (38.17%) had only 1 review on their profile. Here is a breakdown, by capital city, of those advisers with only 1 review.
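The site’s raw data obviously isn’t reproduced here, but the bucketing behind these proportions can be sketched as follows, using made-up review counts per profile:

```python
from collections import Counter

# Hypothetical review counts per adviser profile (illustrative only,
# not the site's actual data); 0 means the profile has no reviews.
review_counts = [0, 0, 0, 1, 1, 5, 12, 31, 0, 1, 2, 0, 95]

# Assign each adviser to the same bands used in the analysis.
buckets = Counter(
    "no reviews" if n == 0 else
    "1 review" if n == 1 else
    "2-5" if n <= 5 else
    "6-30" if n <= 30 else
    ">30"
    for n in review_counts
)

total = len(review_counts)
for label, count in buckets.most_common():
    print(f"{label}: {count} ({count / total:.1%})")
```

Running the same bucketing over the full 8,504-profile sample is what produces figures like “38.17% had only 1 review”.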
Lastly, we analysed the proportion of 5-star ratings within the cohort of advisers with 5 or fewer total reviews (62.48% of all advisers with reviews). It turned out that more than 69% of the advisers in that cohort with only a single review carried a 5-star rating. In practice, those advisers were asking consumers to believe, on the strength of one review, that they performed at a 5-star level. Why is this significant? These days, consumers tend to look at online reviews or ratings for products or services they’re interested in, including financial advice. It seems like a great way to get an unfiltered view of the suitability of what they’re looking for, provided the data is authentic and representative of performance across a sufficiently large sample of existing clients.
Average user ratings have become a significant factor in consumer decision-making, so it’s critical that the rating system is sound and effective, and that those being rated behave ethically when using such services (even if it is solely for their own benefit).
In this instance, one is left to seriously question the motives and credibility of the individual advisers who rely on a single rating to promote their advice performance. It appears that 162 of the 613 advisers with at least 1 review were only prepared to do the bare minimum to promote themselves on the site (and, by extension, on a well-known search engine) as a 5-star adviser. Interestingly, the proportion of 5-star ratings declines as the total number of ratings increases, indicating a more realistic picture of overall performance.
Once again, it’s important to remember the concept of sample size.
In Australia, the general view is that each full-time equivalent adviser is capable of servicing about 120 clients (more if the scope of advice is narrow). So, to determine how many individual reviews an adviser would need for the resulting ratings to be representative of the views of their client base, the required sample size works out at 91 (at a 95% confidence level). In other words, each adviser with a client base of 120 would need reviews from 91 of their clients before their average rating could be taken as representative at that confidence level.
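Applying the same standard finite-population formula (an assumption, since the article doesn’t state its method) to a client base of 120 gives roughly 91.6, consistent with the 91 quoted above if the raw value is rounded down:

```python
import math

# Finite-population sample size for a 120-client base at a 95%
# confidence level, 5% margin of error, p = 0.5 (Cochran's formula
# with finite population correction).
z, e, p, N = 1.96, 0.05, 0.5, 120
n0 = z**2 * p * (1 - p) / e**2   # ~384.16 for an infinite population
n = n0 / (1 + (n0 - 1) / N)      # correction bites hard for small N
print(round(n, 1))               # ≈ 91.6
```

Note how strongly the correction bites for a small population: of 120 clients, more than three quarters would need to leave a review.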
Only 6 advisers (0.07%) out of a total of 8,504 advisers in our sample had 91 or more reviews, thereby calling into question the value of the service to consumers and the motives of participating advisers.
Based on this analysis, it’s clear the service provider hadn’t been very successful in convincing advisers to ‘claim’ their profiles yet (only 7% had any reviews), so the decision to abandon the paid subscription model begins to make more sense. It appears to be a sacrifice to encourage more advisers onto the adviser ratings service and create additional data to support a more lucrative business model.
Perhaps that business model involves monetising the adviser data by selling it to third parties like product manufacturers in the wealth management industry. Someone once said that if you aren’t paying for the product you ARE the product – this should set off alarm bells for Dave (and any other adviser using the service).
This put us in a heightened state of concern for Dave if he went ahead with the offer. Digging a little further, we found promotional material from 2019 that the service provider had used to support fund-raising efforts. A significant element of its strategy involved monetising the data network effect arising from the activity of advisers using its service to generate adviser ratings data.
Network effects occur when the value of participating in a network increases for all participants as more nodes join the network, or as engagement between existing nodes increases. Imagine trying to have a one-way phone conversation, or only being able to call 3 people in the world and no one else; the telephone system became more valuable as more users joined the network. Network effects create lucrative opportunities to leverage data.
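One common, if rough, formalisation of this idea is Metcalfe’s law (not named in the original): the number of possible pairwise connections, and hence the potential value of the network, grows quadratically with the number of participants:

```python
# Metcalfe's law sketch: with n participants there are n * (n - 1) / 2
# possible pairwise connections, so the network's potential value grows
# far faster than its membership.
for n in (2, 10, 100, 1000):
    links = n * (n - 1) // 2
    print(f"{n:>5} participants -> {links:>7} possible connections")
```

Doubling the adviser base therefore roughly quadruples the connection (and data) surface the provider can monetise.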
Putting aside the concerns already identified about the credibility of the rating numbers, we then reviewed the underlying structure of what was being measured in each rating. For the rating to be meaningful and valuable to consumers, it must, at a minimum, measure the key variables that drive good consumer outcomes.
As background, our expertise lies in measuring client relationships with advisers and that has been informed by rigorous academic research and many years of practical application in the financial advice and other industries.
In this case, the 5-star client rating is calculated as an average of all client reviews given to an adviser based on 4 variables which are scored as a percentage and converted into a star rating:
• Communication (how often the client could clearly understand the advice provided and the explanation of how it fitted the client’s needs)
• Knowledge and expertise (how confident the client is that the adviser’s knowledge and experience was broad enough to permit the adviser to dispense the best advice)
• Customer care (how often the adviser was able to give the client their full care and attention when they needed help)
• Recommendation (how likely the client would be to recommend their adviser to others)
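Assuming a straightforward percentage-to-stars conversion (the site doesn’t publish its exact method, so the score/20 mapping and the field names below are illustrative only), the rating calculation might look like this:

```python
def star_rating(reviews):
    """Average the four variable scores across all client reviews
    and convert the result to a 0-5 star scale.

    The site's exact conversion is not published; a straight
    percentage-to-stars mapping (score / 20) is assumed here, and the
    four field names are hypothetical.
    """
    # Each review scores four variables as percentages (0-100).
    per_review = [
        (r["communication"] + r["knowledge"] + r["care"] + r["recommendation"]) / 4
        for r in reviews
    ]
    avg_pct = sum(per_review) / len(per_review)
    return round(avg_pct / 20, 1)  # 100% -> 5 stars

reviews = [
    {"communication": 90, "knowledge": 85, "care": 95, "recommendation": 80},
    {"communication": 100, "knowledge": 100, "care": 100, "recommendation": 100},
]
print(star_rating(reviews))  # → 4.7
```

Note that with a single all-perfect review the function returns a flat 5.0, which is exactly the small-sample distortion the analysis above highlights.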
We were pleased to see that the rating wasn’t based solely on the last variable (as on many other ratings sites), because the likelihood-to-recommend question has very limited value and its premise has been widely discredited as a useful and reliable indicator of performance.
Including the other variables in the calculation goes some way towards mitigating this issue but doesn’t go far enough. The remaining variables are all somewhat relevant to the delivery of advice but only partly capture adviser performance and consumer outcomes. Many other variables should be measured; unlike this adviser ratings site, our platform, MyNextAdvice, utilises a wider range of variables that are statistically proven to more accurately reflect performance and resulting client relationship intentions.
It should also be noted that a rating is a single metric and offers very little of value to the relevant adviser in terms of identifying specific underlying performance gaps, financial impact and actions required to improve performance. It is essentially a tool to attract interested consumers, just as a lure attracts fish; and, like a lure, it has to be carefully designed to catch the right fish.
The analysis had already raised a number of red flags about this adviser ratings service, but we then considered how advisers using the service could be putting themselves at significant professional risk by failing to ‘demonstrate, realise and promote’ the values underpinning FASEA’s Code of Ethics.
Standard 2 of the Code requires all advisers to act with integrity – this requires openness, honesty and frankness. Standard 12 requires each adviser to uphold and promote the ethical standards of the profession and hold each other accountable for the protection of the public interest.
It’s difficult to reconcile these legal obligations with the behaviour of advisers who ‘game’ the service principally to attract new clients through single reviews that inevitably result in 5-star ratings. Advisers who conduct themselves in this manner should consider whether they’re acting with integrity and upholding and promoting the new ethical standards of the profession.
Additionally, businesses that offer incentives to those who write positive reviews risk misleading consumers and breaching the Competition and Consumer Act 2010. While there is no hard evidence of this occurring, based on the analysis of observed profiles, participating advisers need to be careful about the manner in which they encourage clients to complete reviews.
Note: Actual data from the adviser ratings site and other evidence has been used to draw these conclusions which are based on a point in time analysis. If you have any data/information (not personal opinions, assertions or hearsay) to rebut any of our conclusions, please get in touch.