Unlike conventional AI systems that primarily classify or predict data, generative AI models learn the patterns and structure of their training data and then generate new content with similar characteristics. However, their responses are based on inferences about language patterns rather than on what is known to be true or arithmetically correct.
Essentially, LLMs have two central abilities:
- Taking a question and working out which patterns in a vast sea of data must be matched to answer it; and
- Taking a vast sea of data and reversing that pattern-matching process into a pattern-creation process.
Both functions are statistical, so there is some chance the engine will misunderstand any given question. There is a separate probability that its response will be fictitious, a hallucination.
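The two abilities above can be illustrated with a toy sketch. A bigram model is vastly simpler than an LLM, but it shows the same principle: first "match" the patterns in training text, then reverse the process to sample new text that statistically resembles the training data, with no understanding involved. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy training data: a drastically simplified stand-in for an LLM's corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Pattern matching": record which words follow which in the training data.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# "Pattern creation": reverse the process, sampling new text that mimics
# the statistics of the training data.
def generate(start, length=8, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in model:  # dead end: word never had a successor
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is fluent-looking but purely statistical: every word is chosen by probability, not by meaning, which is why such systems can produce confident text that is simply untrue.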
Given the scale of data on which the LLM has been trained and the fine-tuning it receives, it can seem like it knows a lot. However, the reality is that it is not truly “intelligent”. It is only processing patterns to produce coherent and contextually relevant text. There is no thinking or reasoning.
Even with well-crafted prompts, answers can be wrong, biased or entirely fictitious, and may sometimes include harmful or offensive content. However, the potential benefits of generative AI easily outweigh these shortcomings, which major AI providers such as OpenAI are actively working to address.
Source: law.asia