
Banks Can Slash Production Time by Up to 60% with GenAI, HKMA Report Reveals

2025 marks a turning point for artificial intelligence in finance. After a year of conversations dominated by consumer chatbots, the talk has now shifted to whether industrial-grade AI can withstand regulatory scrutiny, operate within risk frameworks, and survive a compliance audit.

This shift is captured clearly in the HKMA GenAI Sandbox report, developed with Cyberport, which offers possibly one of the most structured and transparent examinations of how banks are pushing to deploy GenAI today.

Rather than asking what a model can generate, it focuses on whether a model can be trusted, validated, governed, and deployed safely across high-stakes workflows.

The HKMA GenAI sandbox experimented with GenAI capabilities across three critical domains: risk management, anti-fraud, and customer experience.

HKMA GenAI report (Source: HKMA)

Key Considerations in Evaluating GenAI Sandbox Applications

To determine which proposals made the cut in the HKMA GenAI sandbox, four strategic factors guided the prioritisation process.

HKMA GenAI Sandbox report (Source: HKMA)

First, evaluators looked for a high level of innovation, specifically targeting solutions that introduced novel ideas or methodologies capable of unlocking new digitalisation opportunities.

Next, this was balanced against the complexity of the solutions, ensuring that proposals demonstrated the technical sophistication and intricacy necessary to drive genuine advancement and add value.

Proposals needed to show a significant expected contribution to the industry, like addressing sector-wide challenges through applications that are replicable and scalable.

Finally, the selection process required strict adherence to the principle of fair use. This ensured that the sandbox’s pooled computing resources would be used in a responsible, efficient, and equitable manner.

Since the sandbox operates on shared infrastructure, this particular criterion assesses whether the solution is engineered to be efficient, ensuring it doesn’t monopolise computing power at the expense of other participants.

How Data Strategy and Preparation Affect GenAI Model Development

It goes without saying that the performance and reliability of GenAI solutions depend on the data they are trained or fine-tuned on.
Testing data are equally critical for rigorously assessing the accuracy and contextual relevance of the model's outputs.
Data preparation can consume as much time and resources as fine-tuning the model itself, if not more.
GenAI data strategy and preparation (Source: HKMA)

Data Collection

To ensure datasets were inclusive and dependable, HKMA GenAI sandbox participants used a mix of independent public and proprietary sources. These datasets were curated to cover critical dimensions such as product lines, languages, and customer personas.

Where public websites served as a source, automated collection captured the site's information completely without human intervention and preserved a highly accurate copy of its content.

Data Pre-processing

Once datasets were collected, they required pre-processing to improve data quality and protect privacy before being used.

To maintain consistency and support efficient fine-tuning, banks carried out several data-cleansing and standardisation steps. A key objective was to remove irrelevant “noise” from text inputs and ensure more uniform model outputs.
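The report does not spell out the exact cleansing steps each bank used, but a minimal sketch of this kind of noise removal and standardisation might look like the following (the function name and specific rules are illustrative assumptions):

```python
import re
import unicodedata

def cleanse(text: str) -> str:
    """Normalise unicode, strip markup remnants, and collapse whitespace."""
    text = unicodedata.normalize("NFKC", text)   # unify full-width/half-width forms
    text = re.sub(r"<[^>]+>", " ", text)         # drop stray HTML tags
    text = re.sub(r"\s+", " ", text)             # collapse runs of whitespace
    return text.strip()
```

NFKC normalisation is particularly relevant for mixed English/Chinese corpora, where full-width Latin characters would otherwise tokenise differently from their half-width equivalents.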

Banks also implemented several techniques to remove or obscure sensitive information, including data masking, tokenisation (in the data-security sense of replacing values with non-sensitive tokens), synthetic data generation, and pseudonymisation.
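To make the masking and pseudonymisation ideas concrete, here is a small sketch; the patterns, salt, and function names are hypothetical examples, not the report's actual implementation:

```python
import hashlib
import re

# Hypothetical detectors; real deployments would use bank-specific PII patterns.
HKID = re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def pseudonymise(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    return "TOK_" + hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def mask_text(text: str) -> str:
    """Pseudonymise emails (reversible linkage via stable tokens is preserved)
    and hard-mask identity numbers."""
    text = EMAIL.sub(lambda m: pseudonymise(m.group()), text)
    text = HKID.sub("[ID_MASKED]", text)
    return text
```

The design difference matters: pseudonymisation keeps a stable token so records can still be joined, while masking destroys the value entirely.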

How to generate synthetic data with GenAI (Source: HKMA)

Beyond structured data, banks also worked with large amounts of unstructured content. Document pre-processing remains important, even for multimodal LLMs.

As LLMs have a limited context window, long documents must be split into smaller, logically structured segments. According to the report, this data chunking helped preserve context.
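The report does not give a chunking algorithm, but one common pattern it alludes to, splitting along logical boundaries while carrying some overlap forward so context survives the split, can be sketched as follows (parameter values are illustrative):

```python
def chunk(paragraphs: list[str], max_chars: int = 500, overlap: int = 1) -> list[str]:
    """Split a document into segments along paragraph boundaries,
    repeating `overlap` trailing paragraphs so context carries over."""
    chunks, current = [], []
    for para in paragraphs:
        if current and sum(len(p) for p in current) + len(para) > max_chars:
            chunks.append("\n".join(current))
            current = current[-overlap:]  # carry trailing context into the next chunk
        current.append(para)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Splitting on paragraph boundaries rather than raw character counts is what keeps each segment "logically structured" in the report's sense.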

Data Augmentation

To reduce the risk of overfitting and minimise hallucinations arising from limited training samples, some banks used data augmentation to expand and enrich their datasets, for example by synthesising realistic user inputs.
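One simple way to synthesise user inputs is template expansion; the templates, actions, and products below are invented for illustration, not drawn from the report:

```python
import itertools

# Hypothetical templates; a real programme would use bank-approved phrasing.
TEMPLATES = [
    "How do I {action} my {product}?",
    "What is the fee for {action} a {product}?",
]
ACTIONS = ["open", "close", "upgrade"]
PRODUCTS = ["savings account", "credit card"]

def synthesise_inputs() -> list[str]:
    """Expand every template/action/product combination into a user query."""
    return [
        t.format(action=a, product=p)
        for t, a, p in itertools.product(TEMPLATES, ACTIONS, PRODUCTS)
    ]
```

Even a handful of templates multiplies quickly (2 × 3 × 2 = 12 queries here), which is exactly how augmentation stretches a limited sample set.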

Banks also paid close attention to multilingual balance. Tests confirmed that balancing English, Traditional Chinese, and Simplified Chinese is critical to prevent data skew and ensure terminological accuracy across language variants.

Data Quality Check

A thorough data quality check is vital to assess the reliability of the data assets and uncover any underlying issues.

Handling missing data was another important part of the process.

This required identifying which fields were optional or frequently absent, then deciding on the most appropriate approach: whether to impute the missing values, label them as “Unknown,” or remove non-essential fields that were consistently incomplete.
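The three options the report describes (impute, label as "Unknown", or drop) can be sketched for the first two cases; the function and field names are assumptions for illustration:

```python
from statistics import mean

def handle_missing(records: list[dict], numeric: set[str]) -> list[dict]:
    """Impute numeric gaps with the field mean; label other gaps 'Unknown'."""
    # Per-field means computed only over the values that are present.
    means = {
        field: mean(r[field] for r in records if r.get(field) is not None)
        for field in numeric
    }
    cleaned = []
    for record in records:
        fixed = {
            key: ((means[key] if key in numeric else "Unknown")
                  if value is None else value)
            for key, value in record.items()
        }
        cleaned.append(fixed)
    return cleaned
```

Dropping consistently incomplete non-essential fields, the third option, would simply be a filter on the keys before this step.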

The final step was comprehensive documentation, which ensured the entire process could be reproduced and provided a clear reference for future updates.

Continuous Data Updates

Key optimisation strategies (Source: HKMA)

Maintaining model accuracy requires a continuous flow of fresh, relevant data. This involved implementing automated or semi-automated data pipelines designed to ingest new information from approved sources.

Once collected, this incoming data was processed using the same cleansing and standardisation procedures applied during the initial pre-processing stage, ensuring consistency across the dataset.
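Reusing the original cleansing step on incoming data is the key point here; a minimal sketch of such an ingest loop, with hypothetical names, might be:

```python
def ingest(batches, approved_sources, cleanse):
    """Yield cleansed records from approved sources only,
    reusing the same cleansing step as the initial pre-processing stage."""
    for source, records in batches:
        if source not in approved_sources:
            continue  # anything outside the approved list is skipped
        for record in records:
            yield cleanse(record)
```

Passing the cleansing function in as an argument is what guarantees the refresh pipeline and the initial preparation stay consistent: there is only one cleansing implementation to maintain.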

With refreshed and expanded datasets in place, periodic model retraining helped prevent performance degradation over time and kept the model aligned with evolving language patterns, product updates, and current market conditions.

Strong Early Results, With Guardrails Still Essential

The first cohort of the HKMA GenAI Sandbox delivered clear, tangible gains for participating banks, with production time cut by up to 60% in some use cases.

Fine-tuned LLMs were able to analyse 100% of case narratives, far surpassing traditional sampling methods. User response was consistently positive, with 86% of GenAI outputs rated favourably and more than 70% of GenAI-generated credit assessments considered valuable references.

Business outcomes of the GenAI sandbox (Source: HKMA)

At the same time, the sandbox's report indicated that risks must still be managed. Hallucinations and factual inaccuracies remain key concerns in high-stakes banking environments.

While advanced prompt-engineering techniques helped reduce these errors, banks recognised that these methods mitigate rather than eliminate the issue. For sensitive applications, additional safeguards such as retrieval-augmented generation (RAG) and fine-tuning are still required to ensure accuracy and reliability.
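RAG grounds a model's answer in retrieved documents rather than its parametric memory. A toy sketch of the idea, using naive keyword-overlap retrieval purely for illustration (production systems would use embedding-based search and a real LLM call):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Build a prompt instructing the model to answer only from retrieved context."""
    context = "\n---\n".join(retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```

The guardrail value comes from the instruction to answer only from the supplied context: a hallucinated claim has nowhere to hide when the answer must be traceable to a retrieved passage.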

Key priorities for the second cohort include “embedding AI governance within three lines of defence, developing ‘AI vs AI’ frameworks for proactive risk management, and implementing adaptive guardrails and self-correcting systems”.

Featured image by freepik on Freepik

The post Banks Can Slash Production Time by Up to 60% with GenAI, HKMA Report Reveals appeared first on Fintech Hong Kong.
