Oracle Fusion Multi-Language Support via LLM PoC

Oracle Fusion today supports around 20 languages. To enable them you have to raise a request with Oracle Support, and you can ask to enable or disable as many of these language packs as you need.

For more info on the language support:

Implementing Applications
A language pack contains artifacts that are translated to a specific language. Translated artifacts include UI text, predefined data, messages, BI catalog data, and so on.

.. but what if I need to enable support for languages beyond the 20 or so that are supported today?

💡
For example, governments in the UK may need to add support for Welsh or Irish.

Well.. it got me thinking. Recently, browsers have been looking to incorporate LLMs that you can access via browser APIs, like this -

💡
Here you can see the developer using Chrome DevTools to test out the API: they set a pre-prompt, then prompt the LLM with a question about what the podcast on the page is about. The LLM takes the page content and generates a response.. pretty neat!
This is going to be great, similar to the integrated models Oracle just released for the Oracle Database..

Digging in further.. Google has written up an overview of its proposal and plans for the web API, and what it hopes to expose:

GitHub - explainers-by-googlers/prompt-api: A proposal for a web API for prompting browser-provided language models
A proposal for a web API for prompting browser-provided language models - explainers-by-googlers/prompt-api

This is great, and you can also fine-tune the results with the Fine-tuning (LoRA) API.


Now, unfortunately this isn't production ready yet, but you can try it out by enabling the relevant Chromium flags.

There is a good article here that goes into more detail:

window.ai - Everything about the new Chrome AI feature
Chrome has just released the new window.ai API which allows developers to use AI models locally, fully private and even offline.

As this is still new and changing fast, make sure to check this page for how to update to the latest version..

GitHub - lightning-joyce/chromeai: Chrome Built-in AI Demo page
Chrome Built-in AI Demo page. Contribute to lightning-joyce/chromeai development by creating an account on GitHub.
💡
I did have a few difficulties with step 3 when enabling this, and used this workaround: https://github.com/lightning-joyce/chromeai/issues/4

Once it's ready, within the console you should be able to do this -
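
Roughly, the console interaction looks like the sketch below. The built-in AI surface has been renamed and reshaped across Canary releases, so treat the method names here (canCreateTextSession, createTextSession, prompt) as a snapshot of the early window.ai shape, not a stable API:

```javascript
// Run in the DevTools console of a Chrome build with the built-in AI flags enabled.
// Check whether the on-device model is available ("readily" once it has downloaded).
const availability = await window.ai.canCreateTextSession();
console.log(availability);

// Create a session and prompt Gemini Nano locally - no call out to a hosted service.
const session = await window.ai.createTextSession();
const answer = await session.prompt('Summarise what this page is about in one sentence.');
console.log(answer);
```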

Let's try out multilingual support -
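
The translation attempt is just another prompt against the same session, something along these lines (same caveat about the API shape as above, and the sample sentence is my own placeholder):

```javascript
// Ask the on-device model to translate a typical Fusion-style string into Welsh.
const welsh = await session.prompt(
  'Translate the following into Welsh: "Invoice approval is required before payment."'
);
console.log(welsh);
```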

Hmmm.. as we can see, translation isn't working as hoped..
But it's only a matter of time before we'll be able to use this API for it, without connecting to a hosted GPT-style service, using only the local user's resources.

At the moment the Google Chromium team is only working with Gemini Nano, but there are talks about enabling developers to bring their own LLMs and reuse the browser API layer.

Introducing Gemini: our largest and most capable AI model
Gemini is our most capable and general model, built to be multimodal and optimized for three different sizes: Ultra, Pro and Nano.

But wait, there is another approach!!

One that works locally, without the need to enable any flags, using just the computer's own resources!

In-Browser AI Models

A really good video and interview with insight into how you can use AI models locally from within a browser - I would recommend checking it out.

Essentially, it covers how you can use the Transformers.js library to download and run models locally from within your browser.

Check it out here, along with some of the demos:

Transformers.js
We’re on a journey to advance and democratize artificial intelligence through open source and open science.

One of the demos you'll see is this one - a multilingual translation website:
https://huggingface.co/spaces/Xenova/react-translator

Similar to Google Translate, I can now run a local model from my browser and enter text to translate. I don't speak Welsh, so I can't comment on the model used or how good the translation is, but with the growing number of free models available you can easily switch it out and replace it with a new one.

The demo uses Xenova/nllb-200-distilled-600M
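
If you want to try the same model outside the demo, a minimal Transformers.js sketch looks roughly like this - NLLB uses FLORES-200 language codes, e.g. eng_Latn for English and cym_Latn for Welsh (the sample sentence is my own placeholder):

```javascript
// In a plain <script type="module"> you can import from a CDN instead,
// e.g. https://cdn.jsdelivr.net/npm/@xenova/transformers
import { pipeline } from '@xenova/transformers';

// The model is downloaded to the browser cache on first run, then everything runs locally.
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');

const [result] = await translator('Submit the expense report for approval.', {
  src_lang: 'eng_Latn', // source language (FLORES-200 code)
  tgt_lang: 'cym_Latn', // Welsh
});

console.log(result.translation_text);
```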

Some of the models to look into

I'm still investigating and playing around, but here are some models I've come across that you may want to dive into:

facebook/seamless-m4t-v2-large · Hugging Face
CohereForAI/aya-23-35B · Hugging Face
MarianMT

Interestingly, Cohere supports a very similar list of languages to Fusion..


How can we bring this to Fusion?

Until the browser AI API improves, we can use Transformers.js to translate the UI with a model of our choosing and store the results as keys in local or session storage.
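
As a rough illustration of that caching idea (the localStorage key scheme, the data-needs-translation selector, and the translateCached helper are my own placeholders, not part of the actual extension):

```javascript
import { pipeline } from '@xenova/transformers';

// Load the translator once per page; translated strings are memoised in localStorage
// so repeat visits to the same Fusion page don't re-run the model for known text.
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');

async function translateCached(text, targetLang = 'cym_Latn') {
  const key = `fusion-i18n:${targetLang}:${text}`; // placeholder key scheme
  const cached = localStorage.getItem(key);
  if (cached !== null) return cached;

  const [result] = await translator(text, { src_lang: 'eng_Latn', tgt_lang: targetLang });
  localStorage.setItem(key, result.translation_text);
  return result.translation_text;
}

// Placeholder selector: in practice the extension would walk the Fusion DOM for
// text elements the base translation did not cover.
for (const el of document.querySelectorAll('[data-needs-translation]')) {
  el.textContent = await translateCached(el.textContent.trim());
}
```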

I'm working on a small experiment at the moment. It's not ready for prime time, but it would be available as an extension that provides a dropdown list of available languages. The base UI is translated, and any text elements that were not replaced - i.e. any custom user-inputted data - are processed by the local LLM, with the ability to toggle element text back to the originating language.

- This process also means that as Fusion gets auto-updated, any new functionality (e.g. new tabs where a translation was not available) would be pushed through the LLM and added to local storage.


If you are interested in trying it out, let me know.

Contact
If you want to connect, collaborate, or brainstorm on what you can do with Oracle Cloud, feel free to reach out using the contact form below :)