The first section of the report (a 48-page PDF) outlines how generative AI works and includes a really nice listing of the products and capabilities currently available. The second section, discussing issues, focuses mostly on social harms, such as “AI-generated content polluting the Internet” and “reducing the diversity of opinions and further marginalizing already marginalized voices.” The third section offers a step-by-step recommendation for regulating AI and outlines the responsibilities of stakeholders. Section 4 discusses the steps needed to develop a policy framework on AI, including monitoring and capacity-building. The fifth section then looks at the “creative” uses of AI in research and education, insisting on a human-centered approach. Finally, the last section takes a brief look at generative AI and the future of education and research. All in all, this is as good a document on the subject as any I’ve seen, and is recommended. Source: Stephen Downes, Knowledge, Learning, Community
Summary of the report
- The report provides guidance on the use of generative AI (GenAI) in education and research. GenAI is a type of AI that can automatically generate content such as text, images, videos, music, and code.
- The report discusses the potential benefits and risks of GenAI for education, such as enhancing creativity, facilitating inquiry-based learning, and supporting learners with special needs, but also posing challenges for data privacy, intellectual property, assessment, and diversity.
- The report proposes a human-centered approach to regulate GenAI based on UNESCO’s Recommendation on the Ethics of Artificial Intelligence. It suggests key steps and elements for governmental agencies, providers, institutional users and individual users to ensure ethical, safe, equitable, and meaningful use of GenAI in education.
- The report also offers a policy framework and a co-design approach for facilitating the creative use of GenAI in education and research. It provides examples of how GenAI can be used for research, teaching, learning, and assessment in different domains and contexts.
- The report calls for building capacity for teachers and researchers to make proper use of GenAI, developing AI competencies for learners, promoting plural opinions and expressions, testing locally relevant application models and reviewing long-term implications in an intersectoral and interdisciplinary manner.
By July 2023, some of the alternatives to ChatGPT included the following:
- A fine-tuned version of Meta’s Llama, from Stanford University, that aims to address LLMs’ false information, social stereotypes, and toxic language.
- An LLM from Google, based on its LaMDA and PaLM 2 systems, with real-time access to the internet for up-to-date information.
- Made by Writesonic; builds on ChatGPT while also crawling data directly.
- Made by HuggingFace; emphasizes ethics and transparency, and all data used to train its models are open source.
- A suite of tools and APIs that can be trained to write in a user’s preferred style and can also generate images.
- An open-source approach designed to enable anyone with sufficient expertise to develop their own LLM, built on training data curated by volunteers.
- An LLM with real-time search capabilities that provides additional context and insights for more accurate results.
Most of these are free to use (within certain limits), while some are open-source. Many other products are being launched based on these LLMs, including:
- Summarizes and answers questions about submitted PDF documents.
- Aims to automate parts of researchers’ workflows by identifying relevant papers and summarizing key information.
- Provides a ‘knowledge hub’ for quick, accurate answers tailored to users’ needs.
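Tools like the PDF question-answering products above generally follow a retrieval-augmented pattern: split the document into chunks, score each chunk’s relevance to the question, and hand only the best chunks to an LLM as context. A minimal sketch of that pattern, with crude word-overlap scoring standing in for the vector embeddings real tools use, and a commented-out `ask_llm` placeholder rather than any specific vendor’s API:

```python
def chunk_text(text, size=200):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk, question):
    """Crude relevance score: how many question words appear in the chunk.
    Production tools use vector embeddings instead of word overlap."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def build_prompt(document, question, top_k=2):
    """Pick the most relevant chunks and assemble them into an LLM prompt."""
    chunks = chunk_text(document)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A real tool would now send the prompt to an LLM, e.g.:
# answer = ask_llm(build_prompt(pdf_text, "What does section 2 cover?"))
```

The key design point is that the LLM never sees the whole PDF; only the retrieved chunks fit within the model’s context window.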
LLM-based tools are being embedded into other products, such as web browsers. For example, extensions for the Chrome browser include:
- Gives ChatGPT internet access for up-to-date conversations.
- Autocompletes sentences in emails and elsewhere.
- Provides a ‘team of virtual assistants.’
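Extensions that give a chatbot “internet access” typically work by running a web search first and prepending the result snippets to the user’s message, so the model can answer from current information. A hedged sketch of that prompt-assembly step, with a stubbed `web_search` function standing in for a real search API:

```python
def web_search(query):
    """Stub standing in for a real search API; returns (title, snippet) pairs."""
    return [("Example result", "A snippet of up-to-date text about " + query)]

def augment_prompt(user_message):
    """Prepend fresh search snippets so the model can use current information."""
    results = web_search(user_message)
    snippets = "\n".join(f"- {title}: {snippet}" for title, snippet in results)
    return (
        "Web results:\n" + snippets +
        "\n\nUsing the results above, answer: " + user_message
    )

# The augmented prompt, not the raw message, is what gets sent to the chat model.
```

Nothing about the model itself changes; the extension simply controls what text the model is asked to respond to.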