Important Considerations Before Using Generative AI in Your Studies

The ‘Black Box’ Problem

Large language models are based on complex neural network architectures comprising billions of parameters. While this allows them to generate remarkably human-like text, the downside is that their internal decision-making process is opaque and unintuitive.

Some key challenges around the interpretability of these models include:

  • It is nearly impossible to intuitively understand the roles of the different parameters and model components that influence text generation. The process is inherently statistical and probabilistic.
  • The models make a sequence of token-by-token decisions while generating text, predicting the next token given the previous tokens and wider context. Tracing which contextual factors influenced each decision is almost impossible.
  • Small adjustments in the inputs can sometimes drastically alter model outputs, and explaining those changes poses mathematical and observational challenges.
  • There is no easy way to ask the model “why” it made certain stylistic choices or “which parts” of the input prompted specific elements in the output.
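The token-by-token process described above can be sketched with a toy example. The code below is purely illustrative: the hand-written probability table stands in for the billions of learned parameters in a real model, and the function names are invented for this sketch. Even in this tiny setting, each output token is a probabilistic draw conditioned on everything generated so far, which hints at why tracing "why" a particular word appeared is so difficult in a real model.

```python
import random

# Toy next-token "model": hand-written probabilities standing in for the
# billions of learned parameters in a real large language model.
toy_model = {
    ("the",): {"cat": 0.5, "dog": 0.3, "model": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "dog"): {"barked": 0.7, "ran": 0.3},
    ("the", "model"): {"generated": 1.0},
}

def generate(prompt, steps, seed=0):
    """Generate text one token at a time, sampling each next token
    from a probability distribution conditioned on the context so far."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(steps):
        context = tuple(tokens)          # the whole context shapes each choice
        dist = toy_model.get(context)
        if dist is None:                 # no continuation known for this context
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        # A single probabilistic draw: the result depends on both the
        # model's weights and the sampler's randomness.
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the"], steps=2))
```

Running this prints one of several plausible three-word phrases, and changing the seed or any single probability can change the whole continuation, a miniature version of the sensitivity that makes real models hard to interpret.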

In summary, while the text generated by these models seems coherent and meaningful to us, exactly how and why that text was composed remains opaque to humans due to the ‘black box’ nature of the algorithms.


License


Are You AI Ready? Investigating AI Tools in Higher Education - Student Guide Copyright © 2024 by SATLE 'Are You AI Ready?' Project Team, University College Dublin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.