Mitigating the risk of your client reporting implementation

There are a number of perils for any vendor going into a client reporting project on behalf of an asset management firm. One such problem area is what some in the industry call ‘data interfacing’: handling data from external sources such as benchmark data vendors, third-party administrators, other vendor systems (e.g. risk or performance applications) or various in-house systems. There are three key reasons why interfacing with data sources matters so much to asset managers.

First of all, data accuracy and data currency are of primary concern. Data is embedded within almost every risk factor facing an asset manager, and if an asset manager releases data that is out of date or inaccurate, those risks escalate rapidly.

Secondly, experience tells asset managers that the developers and technologists they have worked with consistently struggle to deliver the level of sophistication and flexibility required when interfacing with external data sources.

The third reason why asset managers worry about data interfacing is that they have never seen the problem truly solved. Most reporting platforms do not deliver on the promise that they can deal painlessly and smoothly with data from any source. As a consequence, many internal IT teams have an inbuilt distrust of what vendors in this space promise.

Vendors therefore have to recognise that the single most important element in any reporting project is the data: where is it, how can it be accessed, and when can it be used in a report? Inaccurate data carries not only an immense operational risk in terms of activities such as asset allocation, but also a significant reputational risk if the client finds the reported numbers to be incorrect. The regulatory risk of fines and judgements is also formidable.

Data interfacing can often require some bespoke coding on the part of the software firm. Asset managers in such situations are sometimes told that ‘it can’t be done’, when the real issue is how the software was designed in the first place. The key is the ability to use customised plug-ins for retrieving external data without destabilising the rest of the code. Too often, software firms have to go deep inside their code base just to accommodate an unusual external data source, but this should not be necessary: the core code can remain unchanged, and the plug-in is the only place where new coding is needed, as the sketch below illustrates.
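To make the idea concrete, here is a minimal sketch of such a plug-in boundary, written in Java purely for illustration. The names (DataSourcePlugin, BenchmarkFeedPlugin, ReportingEngine) are hypothetical and do not describe Factbook’s actual API; the point is simply that each new source becomes one new class while the engine stays untouched.

```java
import java.util.List;
import java.util.Map;

/** Contract that every external data source must satisfy. */
interface DataSourcePlugin {
    String sourceName();

    /** Fetch raw records and normalise them into the core's canonical rows. */
    List<Map<String, Object>> fetchRecords(String portfolioId);
}

/** One new class per unusual source; the reporting core never changes. */
class BenchmarkFeedPlugin implements DataSourcePlugin {
    @Override
    public String sourceName() {
        return "benchmark-feed";
    }

    @Override
    public List<Map<String, Object>> fetchRecords(String portfolioId) {
        // Parsing of the vendor's file or API would live here, isolated
        // from everything else; a canned row stands in for it.
        return List.of(Map.of("portfolio", portfolioId, "benchmarkReturn", 0.042));
    }
}

/** The stable core: it simply iterates over whatever plug-ins are registered. */
class ReportingEngine {
    private final List<DataSourcePlugin> plugins;

    ReportingEngine(List<DataSourcePlugin> plugins) {
        this.plugins = plugins;
    }

    void buildReport(String portfolioId) {
        for (DataSourcePlugin plugin : plugins) {
            System.out.println(plugin.sourceName() + ": " + plugin.fetchRecords(portfolioId));
        }
    }

    public static void main(String[] args) {
        new ReportingEngine(List.of(new BenchmarkFeedPlugin())).buildReport("FUND-001");
    }
}
```

Accommodating a new feed then means registering one more DataSourcePlugin implementation; nothing inside ReportingEngine is touched.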

Virtually every external data sourcing issue can be overcome by this plug-in approach. Even if the asset manager requests an unusual form of output that no firm has ever attempted before, vendors can take exactly the same approach: a plug-in to deal with the output. That seldom happens in practice; more often the issue concerns accessing data and bringing it into the reporting realm rather than generating outputs, but the principle remains the same, as the sketch below shows.
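The same boundary can be drawn on the output side. This fragment is equally hypothetical; OutputPlugin and UnusualSpreadsheetOutput are illustrative names, not a real product API.

```java
import java.util.List;
import java.util.Map;

/** Contract for rendering assembled report rows into some delivery format. */
interface OutputPlugin {
    String format();

    byte[] render(List<Map<String, Object>> reportRows);
}

/** An unusual output request becomes one new class, not a core rewrite. */
class UnusualSpreadsheetOutput implements OutputPlugin {
    @Override
    public String format() {
        return "custom-spreadsheet";
    }

    @Override
    public byte[] render(List<Map<String, Object>> reportRows) {
        // Bespoke formatting logic lives here; the core that assembled
        // reportRows is untouched. A trivial rendering stands in for it.
        return reportRows.toString().getBytes();
    }
}
```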

With this methodology, asset managers can substantially reduce their project risk at implementation time. Indeed, when an asset manager is working with a software vendor for the first time, the number one concern is: ‘Will this actually work, or will we be here in two years’ time, still trying to implement the solution?’ (sadly, not an uncommon scenario in the client reporting world). Yet the effective use of plug-ins can reduce the time involved in a client reporting system deployment by as much as 50%.

The only factor that should result in a protracted implementation project for an asset manager is the volume of work, not its complexity; the software vendor should be handling the complexity. If an asset manager has three unusual data interfaces, that should be a simple problem to contain. If the asset manager has 300 complex data interfaces, then the vendor has to solve each one (perhaps not 300 times, but certainly more than three). A smart vendor will look for repeatable patterns, finding commonality in the problems and thus reducing the amount of unique code that has to be written, as the sketch below suggests.
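One plausible way of factoring out that commonality, again as a hypothetical sketch building on the DataSourcePlugin interface from the first example: a single configurable plug-in covers every delimited-file feed, so hundreds of interfaces collapse into a handful of templates plus configuration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

/** One template plug-in, configured per feed, instead of one class per feed. */
class DelimitedFeedPlugin implements DataSourcePlugin {
    private final String name;
    private final String delimiter;
    private final List<String> columns;

    DelimitedFeedPlugin(String name, String delimiter, List<String> columns) {
        this.name = name;
        this.delimiter = delimiter;
        this.columns = columns;
    }

    @Override
    public String sourceName() {
        return name;
    }

    @Override
    public List<Map<String, Object>> fetchRecords(String portfolioId) {
        // In reality the lines would be read from the feed itself; one
        // hard-coded line stands in for the download here.
        String line = "ABC123" + delimiter + "1000" + delimiter + "GBP";
        String[] fields = line.split(Pattern.quote(delimiter));
        Map<String, Object> row = new LinkedHashMap<>();
        for (int i = 0; i < columns.size() && i < fields.length; i++) {
            row.put(columns.get(i), fields[i]);
        }
        List<Map<String, Object>> rows = new ArrayList<>();
        rows.add(row);
        return rows;
    }
}
```

Three hundred feeds then become three hundred configurations, e.g. new DelimitedFeedPlugin("custodian-a", "|", List.of("isin", "units", "ccy")), rather than three hundred bespoke pieces of code.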

Very often, the asset manager’s task is complicated by the need to handle incoming data from all of its affiliates, each of which might see the world differently. For example, in order to output a report on the top ten securities held, Affiliate A might say: ‘We want to look at Central America, but we don’t regard the Bahamas as part of Central America but as part of North America’. That view is unique to that affiliate. The next affiliate might say: ‘We don’t have Central America, we just have north and south’. Accommodating those different perspectives on exactly the same data is a problem the vendor should be able to solve, as sketched below.
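A minimal, self-contained illustration of the idea, with hypothetical names and made-up mappings: each affiliate supplies its own country-to-region classification, and the underlying holdings data never changes.

```java
import java.util.List;
import java.util.Map;

/** Each affiliate's world view is just data: a country-to-region mapping. */
class RegionClassifier {
    private final Map<String, String> countryToRegion;

    RegionClassifier(Map<String, String> countryToRegion) {
        this.countryToRegion = countryToRegion;
    }

    String regionOf(String country) {
        return countryToRegion.getOrDefault(country, "Unclassified");
    }
}

class AffiliateViews {
    public static void main(String[] args) {
        // Affiliate A recognises Central America but places the Bahamas
        // in North America...
        var affiliateA = new RegionClassifier(Map.of(
                "Bahamas", "North America",
                "Panama", "Central America"));
        // ...while Affiliate B recognises only north and south.
        var affiliateB = new RegionClassifier(Map.of(
                "Bahamas", "North America",
                "Panama", "North America"));

        // The same holdings are classified differently per affiliate.
        for (String country : List.of("Bahamas", "Panama")) {
            System.out.printf("%s -> A: %s, B: %s%n", country,
                    affiliateA.regionOf(country), affiliateB.regionOf(country));
        }
    }
}
```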

Asset managers will often be told that if they wish to implement a new data management system while introducing a new client reporting application, they must first get the interfacing with their external data sources in order. The two projects are kept clearly separate, thus extending the overall project time. I would argue that, through the efficient use of plug-ins, the asset manager can commence its reporting project immediately.

Vendors should not need to rewrite entire sections of their code for each implementation, and asset managers should not have to endure the delays that such rewriting inevitably causes. Efficient data interfacing does not require new coding every time, but neither should it rely on extensive manual processing, which is both expensive and fraught with operational risk.

(Factbook has been using plug-in technology since the first release of its .Net product in 2009.)

Abbey Shasore, CEO, Factbook

@FactbookCompany

www.factbook.co.uk
