Those who follow me might remember a similar AI rant from a year ago, published under the pseudonym “Grumpy Risk Manager”. Now I’m back, grumpier than ever, this time with specific examples but also with ideas for solutions!
Large Language Models (LLMs) like ChatGPT are impressive in their ability to discuss generic topics in natural language. However, they struggle in specialist domains such as medicine, finance, and law. This is due to a lack of real understanding and a focus on imitation rather than intelligence.
Artificial Imitation of Intelligence (AII) vs. AI
LLMs are at the peak of their hype, considered “intelligent” due to their natural language abilities. However, diving into complex domains like medicine, finance, or law reveals inconsistencies, mistakes, and “hallucinations.” These models behave like pupils with vast dictionaries, pretending to know the answers without true understanding. Their reasoning ability is limited.
The so-called Artificial Intelligence (AI) is very often Artificial Imitation of Intelligence (AII). This is particularly bad in specialist domains like medicine or finance, since a mistake there can lead to human harm and financial losses.
Regulatory Capital Requirements: An Example
Consider the domain of financial risk. The logic behind regulatory capital requirements can be summarised as follows:

1. Banks are subject to regulatory Capital requirements.
2. Capital can be considered a buffer which absorbs financial losses.
3. The requirement to hold Capital ensures that banks have sufficient capacity to absorb losses, reducing the likelihood of bankruptcy and financial crisis.
4. The rules for setting the requirements in 1. are based on risk-proportionality principles: the riskier the business that a bank undertakes → higher risk-weights → higher capital requirements → larger loss buffer → stable bank.
5. The degree of riskiness in 4. is often measured in the form of the credit rating of the firms with which the bank does business.
6. Credit ratings come from different agencies and in different formats.
7. In order to standardise the ratings, regulators have created mapping rules from every rating format to a standardised Credit Quality Step (CQS) in the range of 1 to 6.
8. The regulatory rules for determining the risk-weights in 4. are then based on the CQS.
9. The rules in 8. for European banks are set in the Capital Requirements Regulation (CRR).
Let us focus on the core chain of rules: rating → CQS → risk-weight → capital requirement. For corporate exposures, CRR Article 122 provides the mapping table from CQS to risk-weight.
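To show just how small this core rule actually is, the Article 122 table for corporate exposures can be written as a simple lookup (risk-weights reproduced from the CRR; the function name is my own):

```python
# CQS -> risk-weight mapping for corporate exposures (CRR Article 122).
# Unrated exposures follow separate rules and are omitted here.
CQS_TO_RISK_WEIGHT = {1: 0.20, 2: 0.50, 3: 1.00, 4: 1.00, 5: 1.50, 6: 1.50}

def risk_weight(cqs: int) -> float:
    """Return the risk-weight for a rated corporate exposure with the given CQS."""
    if cqs not in CQS_TO_RISK_WEIGHT:
        raise ValueError(f"CQS must be in 1..6, got {cqs}")
    return CQS_TO_RISK_WEIGHT[cqs]

print(risk_weight(5))  # 1.5, i.e. 150% -- the answer ChatGPT got wrong
```

Five lines of unambiguous data and logic, compared to the probabilistic text association that produced the answer below.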
ChatGPT’s Struggle with CRR
When asked about the risk-weight of a corporate exposure with a credit quality step of 5 according to CRR, ChatGPT 3.5 provided an incorrect answer:
In the Capital Requirements Regulation (CRR), the risk-weight of an exposure to a corporate entity depends on its credit quality step. In the context of credit quality steps, a credit quality step of 5 typically represents the highest credit quality, indicating a very low risk of default.
In the standardised approach outlined in the CRR, exposures to corporates are assigned risk weights based on their credit ratings or credit quality steps. The risk weights for exposures to corporates range from 20% to 150%, depending on the credit quality step and whether the exposure is short-term or long-term.
Since a credit quality step of 5 signifies a very low risk, the corresponding risk weight assigned to such an exposure would be one of the lowest. In this case, for a corporate with a credit quality step of 5, the risk weight would likely be 20% or similar, indicating a low capital requirement for banks holding such exposures.
The factual mistakes: the relationship between CQS and risk-weight is inverted (CQS 5 indicates high risk, not “a very low risk of default”), and the confidently claimed risk-weight of 20% should in fact be 150%.
Ontologies and Knowledge Graphs: A Solution
Ontologies offer a way to represent knowledge in a particular domain. They provide a uniform representation of structure, data, and logic. This includes classes with generic properties, instances of individual items, and logic that makes the data “smart.”
- Data dictionary: table with field names and metadata attributes
- Taxonomy: table/s with added nesting of data types and sub-types in terms of relationships (e.g. Pigeon → Bird)
- Ontology: multidimensional taxonomies with more than one type of relationship (e.g. Birds eat Seeds), or “the unholy marriage of a taxonomy with object oriented programming” (Kurt Cagle, 2017)
Knowledge Graphs (KGs) are a more generic term for storing data in graph format. With the rise of LLMs, KGs are seen as a way to resolve their weaknesses in specialist domains.
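A minimal illustration of the distinction: a knowledge graph is, at its core, a set of subject–predicate–object triples that can hold more than one type of relationship. The tiny in-memory store below is a hypothetical sketch, not a real triple-store API:

```python
# A toy knowledge graph: a set of (subject, predicate, object) triples.
triples = {
    ("Pigeon", "is_a", "Bird"),   # taxonomy: type/sub-type nesting
    ("Bird", "is_a", "Animal"),
    ("Bird", "eats", "Seeds"),    # ontology: a second relationship type
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="eats"))  # [('Bird', 'eats', 'Seeds')]
```

Real systems use standards such as RDF for the triples and SPARQL for the queries, but the principle is exactly this.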
Benefits of Using Ontologies
- Uniform Representation: of structure, data, and logic.
- Explainability and Lineage: Graphs can be queried to evaluate rules, tracing the chain of thought and avoiding the ML black box problem.
- Reasoning and Inference: New knowledge can be derived from existing data via “What if” questions.
- Consistency: Conflicting rules are automatically identified.
- Interoperability and Scalability: Knowledge can focus on a specific domain or connect different domains.
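The “Reasoning and Inference” point can be made concrete in a few lines. Continuing the bird example, a transitive-closure rule over `is_a` derives a fact that nobody wrote down explicitly (again a toy sketch, not a production reasoner):

```python
# Derive new knowledge: if X is_a Y and Y is_a Z, infer X is_a Z.
triples = {
    ("Pigeon", "is_a", "Bird"),
    ("Bird", "is_a", "Animal"),
}

def infer_transitive(triples, predicate="is_a"):
    """Repeatedly apply the transitivity rule until no new triples appear."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(inferred):
            for (y2, p2, z) in list(inferred):
                if p1 == p2 == predicate and y == y2 and (x, predicate, z) not in inferred:
                    inferred.add((x, predicate, z))
                    changed = True
    return inferred

new_facts = infer_transitive(triples) - triples
print(new_facts)  # {('Pigeon', 'is_a', 'Animal')}
```

Crucially, the derivation can be traced: we know exactly which rule fired on which triples, which is the explainability and lineage property listed above.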
Why Use Ontologies in Banking?
- Specialisation: LLMs struggle in finance, where mistakes are costly.
- Audit Trail: Regulators and auditors expect a continuous audit trail.
- Explainability: Professionals need a good understanding of bank operations.
- Objectivity and Standardisation: Removing room for interpretation ensures a level playing field.
Imagine regulations like the CRR being provided as ontologies, which banks could import and extend with their own data. This would facilitate automated financial planning and optimised decision-making.
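To make the idea tangible: if the CQS-to-risk-weight table came packaged as part of a machine-readable CRR, a bank could join it with its own exposure data and compute its requirements automatically. The sketch below assumes the standard 8% minimum capital ratio; the exposure figures are invented:

```python
# Hypothetical: the regulation shipped as data (CQS -> risk-weight, CRR Art. 122)
REG_RISK_WEIGHTS = {1: 0.20, 2: 0.50, 3: 1.00, 4: 1.00, 5: 1.50, 6: 1.50}
MIN_CAPITAL_RATIO = 0.08  # the 8% Pillar 1 minimum

# The bank's own data: corporate exposures with their CQS (figures invented)
exposures = [
    {"name": "Acme Corp", "amount": 1_000_000, "cqs": 2},
    {"name": "Shaky Ltd", "amount": 500_000, "cqs": 5},
]

def capital_requirement(exposures):
    """Sum of exposure * risk-weight * 8% over all exposures."""
    return sum(e["amount"] * REG_RISK_WEIGHTS[e["cqs"]] * MIN_CAPITAL_RATIO
               for e in exposures)

print(capital_requirement(exposures))  # 40_000 + 60_000 = 100000.0
```

The bank only maintains its own exposure list; the regulatory logic arrives pre-encoded and pre-validated, instead of being re-implemented (and re-misread) at every institution.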
Conclusion: The Future of AI in Specialist Domains
Enterprises can develop ontologies to encode, automate, and reuse knowledge. This enforces a common dictionary and reduces wasted effort. The same concept can be applied to any specialist area where text association is insufficient and LLMs struggle.
If data is nowadays deemed as valuable as gold, I believe that the real diamond is structured data, that we can call knowledge. Such knowledge in the form of ontologies and knowledge graphs can also be traded between companies just like data is traded now for marketing purposes. Who knows, maybe this will evolve into a pay-per-node business model, where expertise in the form of smart data can be sold as a product or service.
LLMs are already impacting profits. Klarna expects a $40 million improvement due to ChatGPT handling customer service chats.
Future Ideas
Future ideas include querying ontologies in plain English, and developing a more robust way of generating ontologies in the first place. A proof of concept that achieves both is what I would call true AI.