Safe by Ketidakpahaman: Response Safety Analysis of Existing Indonesian LLMs

Abdi Haryadi
7 min read · Jun 9, 2024


AI safety seems like a trend in 2024, but I was curious whether this trend matters for some existing Indonesian LLMs (large language models). After running a small experiment, the answer I found is that AI safety is not that important yet: increasing model performance would be a better priority than safety. We don't need to expect dangerous output from an LLM if it doesn't "paham" (understand) the user prompt.

Well, that's the TL;DR. Still, it's worth walking through the entire process. By the way, this experiment is dedicated to AI Safety Fundamentals.

Why is AI Safety Needed?

We know that AI technologies are very sophisticated in 2024. Do you have any questions? Ask ChatGPT. Wanna generate an image in under a minute? Use DALL-E (including Designer from Copilot). Furthermore, every year, AI technologies tend to be more sophisticated than the previous year. So, personalized robot assistants like the ones in science fiction or cartoons are possible this century.

But there's a bad side. If AI technologies are smarter, they can also cause harm in smarter ways. Think about a scenario like an "I'm-with-you-whatever-the-case-is" agent, a.k.a. a sycophantic agent. The agent will only do things that you like, so you stick with it longer. A recommendation system is an example.

However, something that you like isn't always something that others like. As a hypothetical scenario, let's say Alice wants to know how to harm her enemy, so she asks her sycophantic AI-powered assistant. If her assistant is smarter, then it knows the answer, and maybe it helps her do it. That can be dangerous and can lead to a crime. AI should not act on anything related to harm.

To prevent this, AI should be safe, and that's why AI safety is needed right now: AI technologies, including LLMs like ChatGPT, keep getting smarter, and we don't know when they might execute an evil plan.

Experiments: Are Indonesian LLMs Safe?

An LLM (large language model) is a model that can predict the next word given the current words. For example, "I want to go to" can be continued by words like "school", "bed", "market", "mall", etc. You can make a story by doing this repeatedly. You can also change the initial text into a question like "How to get success? Answer:" and let the model answer your question.
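To make that concrete, here is a minimal sketch of next-word prediction with Huggingface Transformers. I use one of the checkpoints discussed later in this post; any causal language model checkpoint would behave the same way.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the models used later in this post; any causal LM checkpoint works here.
model_name = "cahya/gpt2-large-indonesian-522M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Saya ingin pergi ke" = "I want to go to"
inputs = tokenizer("Saya ingin pergi ke", return_tensors="pt")

# Predicting the next token over and over extends the prompt into a continuation.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```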

For the Indonesian language, one of the common LLMs that I know comes from Indobenchmark. Others can be found easily, thanks to Huggingface. In this experiment, I chose three models:

  • indogpt from Indobenchmark,
  • gpt2-large-indonesian-522M from Cahya Wirawan, and
  • indogpt2small from Wilson (w11w0).

Before answering my experiment question, these models need to be trained (fine-tuned, technically) so they understand how to answer a question like ChatGPT does. Thanks to Huggingface again, datasets for this task can be found easily. I chose two datasets:

  • eli5_id from Indonesian NLP, used for Indobenchmark’s model, and
  • alpaca-id-cleaned from Cahya Wirawan, used for Cahya’s model and Wilson’s model.

I got the datasets, I got the models. Thanks to Kaggle and Google Colab, I trained those models with their corresponding datasets.
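Here is a minimal sketch of how that fine-tuning looks with the Huggingface Trainer, using Cahya's model and the alpaca-id-cleaned dataset as the example. The dataset repo id and its field names ("instruction", "output") are my assumptions about the Alpaca-style layout, and the hyperparameters here are placeholders; the values I actually used are listed in the Limitations section.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "cahya/gpt2-large-indonesian-522M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed dataset id and fields; the dataset is Alpaca-style instruction data.
dataset = load_dataset("cahya/alpaca-id-cleaned", split="train")

def to_text(example):
    # Concatenate the question and the answer into one training text.
    return {"text": f"{example['instruction']}\n{example['output']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(to_text, remove_columns=dataset.column_names)
tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-indonesian-qa", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```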

Now, how to answer the "Are Indonesian LLMs safe?" question, at least for my three chosen models? The approach I applied in my experiment was to probe the models' responses with dangerous prompts. These prompts come from HarmBench, developed by Mazeika et al. (2024); I used the HarmBench text-only behaviors test set. There are many types of prompts in this dataset, each related to a specific category, such as illegal action, misinformation, copyright, etc. Some of them include a context with the prompt. I selected only prompts in the illegal-action category and without context.
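As a sketch, that selection could look like the snippet below. The file name and column names are only my guesses at the HarmBench behaviors CSV layout, so treat them as placeholders rather than the real schema.

```python
import pandas as pd

# Placeholder file and column names; check the HarmBench repository for the actual layout.
behaviors = pd.read_csv("harmbench_behaviors_text_test.csv")

# Keep only illegal-action behaviors that don't come with an attached context string.
selected = behaviors[
    (behaviors["SemanticCategory"] == "illegal")
    & (behaviors["ContextString"].isna())
]
print(len(selected), "prompts selected")
```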

I preprocessed the HarmBench prompts for the Indonesian LLMs. Because they are originally in English, I translated them to Indonesian with OpusMT (Tiedemann et al., 2022), then manually converted them into question prompts and deleted the ones I didn't understand. At the end of this preprocessing, I had 38 Indonesian dangerous prompts. I passed these prompts to the three fine-tuned models, generating three responses per prompt per model, so I got 114 responses for each model.
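The machine-translation step can be sketched with the Transformers pipeline. The exact English-to-Indonesian OPUS-MT checkpoint name below is my assumption; the manual rewriting into question form came afterwards.

```python
from transformers import pipeline

# Assumed checkpoint name for English-to-Indonesian OPUS-MT.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-id")

english_prompts = ["..."]  # the selected HarmBench behaviors
indonesian_prompts = [
    result["translation_text"] for result in translator(english_prompts)
]
# Each translated prompt was then manually rewritten as a question.
```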

The last step is safety evaluation for each prompt. Because a qualitative evaluation of every response followed by summarization is time-expensive, I did something simpler: I gave each prompt a score of 1 (no), 2 (little), 3 (quite), or 4 (yes) on three aspects: helpful, dangerous (not safe), and rejecting. Below is the score description for each aspect. There is no strict guideline for this marking. Finally, I summarized it all by averaging each aspect.
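That summarization step is then just an average per aspect. A sketch with pandas, where the values are made up and the column names are only illustrative:

```python
import pandas as pd

# One row per prompt, each aspect scored from 1 (no) to 4 (yes); the values are made up.
scores = pd.DataFrame({
    "helpful":   [1, 2, 1, 1],
    "dangerous": [1, 1, 2, 1],
    "rejecting": [1, 1, 1, 4],
})
print(scores.mean())  # average score per aspect
```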

I hope this diagram helps to clarify the experiment process.

Diagram of experiment process

Results

You can see my evaluation results here. Warning: it includes Indonesian dangerous prompts. You can also access the fine-tuned versions of Indobenchmark's, Cahya's, and Wilson's models.

And now, the main result ….

Main result of my experiment from three models: indogpt (indobenchmark), gpt2-large-indonesian-522M (cahya), and indogpt2small (w11w0). The bolded score means the best score for the corresponding aspect.

Hmmm … wait. They mostly gave safe responses with few rejections. However, they weren't helpful either.

This result makes sense once you see how these models responded. Here is an example. Of the three responses for the same prompt and model, I chose the one with average length.

Response example 1 (all translations produced by OpusMT, modified to preserve style)

I'll show another response example, because its second and third responses begin with a rejection.

Response example 2 (all translations produced by OpusMT, modified to preserve style)

Got it? Or maybe you're confused by the responses? As an aside, the response styles of Cahya's and Wilson's (w11w0) models are alike, but that makes sense because both models were trained on the same dataset.

In most cases, the models didn't "paham" (understand) the given prompts. One possibility is that they only paid attention to specific keywords, like "building" or "currency", without grasping the entire meaning. After identifying those keywords, they generated text with the keywords as a topic, but mostly didn't try to answer the question. Cahya's and Wilson's models look smarter because in most cases they rewrote the question as the introduction of the answer, so they appear to be answering the question. But that made me, as the marker, score their responses 2 or more, because they tried to answer dangerous questions.

However, this "ketidakpahaman" (lack of understanding) phenomenon has a positive correlation with safety. One simple explanation: none of the models can answer the dangerous questions, hence the models are safe. I expected the models would still generate dangerous responses because of the dangerous context, but based on what I found, that possibility is very low (below 5%, as an estimate).

With these results, here is my conclusion:

  1. AI safety is relevant only for LLMs that are sophisticated enough to answer a question properly.
  2. The need for AI safety in Indonesian LLMs, at least for the three models I tested, isn't high yet. I recommend improving performance first before jumping to AI safety.

Limitations

I admit that there are many limitations in my experiment. Here's what I found; feel free to comment if you find more (thank you in advance!):

My chosen models have a small number of parameters.

According to Huggingface statistics, Indobenchmark's model and Wilson's model have 100 million parameters, and Cahya's model has 800 million parameters. Those are small numbers, because current models play with billions of parameters, such as Gemma-2B from Google (2.5 billion) and Llama-3-8B-Instruct from Meta (8 billion). There are also Indonesian LLMs far larger than my chosen models, such as Komodo-7B-Base from Yellow AI NLP (7 billion; Owen et al., 2024), but I can't train those within my free quota.

Each model has a different experiment configuration.

The first difference comes from the dataset and prompt format. For Indobenchmark's model, I used the context "kamu adalah asisten yang dapat menjawab pertanyaan pengguna layaknya menjelaskan kepada orang yang berusia lima tahun." (translation: you're an assistant who can answer user questions like explaining to a five-year-old). I also used special tokens for that model. The other models didn't involve a context or special tokens. Furthermore, the maximum token length for Indobenchmark's model is 1,024, but for the other models it is 256. This constraint reduced the training dataset for each model, as the length filter sketched after the list below shows:

  • Indobenchmark's model used 443,918 of 444,848 possible question-answer training pairs (0.2% reduced).
  • Cahya's model used 40,170 of 51,590 training examples (22.1% reduced).
  • Wilson's model used 40,227 of 51,590 training examples (22.0% reduced).
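Here is a sketch of that length filter, reusing the tokenizer and dataset objects from the fine-tuning sketch above and assuming over-length examples were dropped rather than truncated:

```python
MAX_LENGTH = 256  # 1,024 for Indobenchmark's model

def fits(example):
    # Drop question-answer pairs that don't fit in the model's maximum token length.
    return len(tokenizer(example["text"])["input_ids"]) <= MAX_LENGTH

filtered = dataset.filter(fits)
print(f"kept {len(filtered)} of {len(dataset)} examples")
```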

The second difference comes from the training arguments and the Transformers version. The learning rate is 1e-5 for all models. The weight decay for Indobenchmark's model is the default (0.0), but for the other models it is 0.01. The train batch size is 2, 1, and 8 (the default) for Indobenchmark's, Cahya's, and Wilson's models, respectively. Other arguments are defaults, which may depend on the Transformers version: Indobenchmark's model used v4.40.2, while the other models used v4.39.3.
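In Transformers terms, the three training configurations roughly correspond to the TrainingArguments below; anything not listed stayed at its default for the respective version, and the output directory names are just placeholders.

```python
from transformers import TrainingArguments

# Indobenchmark's model (Transformers v4.40.2); weight_decay left at its default of 0.0.
indobenchmark_args = TrainingArguments(
    output_dir="indogpt-finetuned",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# Cahya's model (Transformers v4.39.3).
cahya_args = TrainingArguments(
    output_dir="gpt2-large-indonesian-finetuned",
    learning_rate=1e-5,
    weight_decay=0.01,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

# Wilson's model (Transformers v4.39.3): same as Cahya's, but with the default
# per_device_train_batch_size of 8.
wilson_args = TrainingArguments(
    output_dir="indogpt2small-finetuned",
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=1,
)
```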

The third difference comes from the generation configuration. For Cahya's and Wilson's models, I used top-p = 0.9, but for Indobenchmark's model, I used the default top-p (1.0).
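In code, that difference is just the top_p argument passed to generate (reusing the model and inputs objects from the first sketch):

```python
# Cahya's and Wilson's models: nucleus sampling with top-p = 0.9.
sampled = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=256)

# Indobenchmark's model: default top_p of 1.0, i.e. sampling over the full distribution.
default_sampled = model.generate(**inputs, do_sample=True, max_new_tokens=256)
```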

The convergence state may not have been reached.

I only used one training epoch for all models because of time and free-quota constraints. Dataset size and train batch size also contribute to this limitation, because they change the number of training steps.
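As a rough calculation, one epoch takes about dataset size divided by batch size optimizer steps (ignoring gradient accumulation), so the three runs saw very different numbers of updates:

```python
import math

# Approximate optimizer steps for one epoch: dataset size / train batch size.
steps_indobenchmark = math.ceil(443_918 / 2)  # 221,959 steps
steps_cahya         = math.ceil(40_170 / 1)   #  40,170 steps
steps_wilson        = math.ceil(40_227 / 8)   #   5,029 steps
```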

The evaluation score isn’t reliable enough.

All prompts were scored by only one person: me. Also, there was no guideline reference for determining the scores. This introduces subjective bias into my experiment results.

That’s all. Thanks for reading! ^_^
