Use this page to find out more about the current LUH AI guidelines. Individual sections of the guidelines are discussed in the FAQ below. The glossary explains the terms used in the guidelines.
Foreword
Questions regarding how to deal with AI systems now form a significant part of our everyday work in the area of teaching and examinations at LUH. Our goal is to enable an exploratory and open approach to AI systems wherever it makes sense from a technical and didactic point of view, and to promote data literacy among our students.
Data literacy is an important part of the reflective, transformative ability to act. This ability is outlined in the LUH Teaching Constitution as a guiding principle for all study programmes and is an essential prerequisite for students to be able to contribute responsibly to overcoming social, ecological and economic challenges.
The AI guidelines reflect the current state of the internal discussion at LUH on AI in teaching; they also take into account the current legal situation. They are intended to support the sensible, didactically justified and legally compliant use of AI and are updated annually during the summer semester. The following explanations provide additional details regarding the individual sections of the AI guidelines.
Prof. Dr. Julia Gillen, Vice President for Education at Leibniz Universität Hannover
Last updated: 2 June 2025
Guidelines
Please note: This document is a translation and is provided for information purposes only. In the event of any inconsistency between the German version and the English version, only the German version shall apply.
Section 1: Purpose and scope
(1) These guidelines regulate the use of artificial intelligence (AI) in teaching at LUH, with the aim of implementing the requirements of Regulation (EU) 2024/1689 on artificial intelligence (EU AI Act) and ensuring the safe and responsible use of AI systems.
(2) The guidelines are binding for all teaching staff members and users at LUH who utilise AI systems as part of teaching at LUH.
Section 2: Responsibilities
(1) Teaching staff are responsible for complying with these guidelines when using AI systems in their courses.
(2) Users are obliged to observe the provisions set out in these guidelines when using AI systems.
Section 3: Teaching AI skills
(1) LUH offers training courses to promote AI competence for all users.
(2) Participation in training measures in accordance with paragraph 1 must be documented.
(3) Teaching staff who use AI systems in a course are obliged to integrate training materials into their courses or to offer appropriate training for users. This applies particularly to the use of non-centralised AI systems in courses. If the AI system is provided centrally by LUH, a reference to the central training offered by LUH is sufficient.
Section 4: Use of centrally provided and non-centralised AI systems
(1) The AI systems provided centrally by LUH shall be given preference. The compulsory use of such systems in courses is permitted.
(2) The compulsory use of non-centralised AI systems in courses requires the approval of the LUH AI Council and is only permitted if no personal data is processed.
(3) The use of non-centralised systems must be documented and, if the AI Council’s approval is not required, reported to the AI Council.
Section 5: Terms of use
(1) AI systems may be used only in accordance with the terms of use outlined by the provider and operator.
(2) AI systems may not be used for purposes other than their designated purpose.
Section 6: Data protection and personal rights
(1) The processing of personal data in AI systems is not permitted without the explicit authorisation of the AI Council.
(2) The entry of trade secrets and sensitive research data into AI systems is prohibited in order to prevent the unauthorised use or disclosure of such secrets.
Section 7: Copyright
(1) When entering copyright-protected texts into AI systems, users must comply with the applicable copyright provisions.
(2) Results obtained from AI systems are considered to be in the public domain unless they have been further developed by the user through a significant amount of intellectual effort.
Section 8: Use in examinations
(1) Only AI systems provided centrally by LUH may be used in examinations.
(2) The assessment type must uphold the principle of equal opportunities and be compatible with the examination regulations.
(3) Assessments must be evaluated individually by a person; automated correction is not permitted.
Section 9: Prohibited use
(1) Practices prohibited under Article 5 of the EU AI Act are not permitted at LUH.
(2) Other impermissible use scenarios include, in particular, the entry of personal data from third parties and the automated evaluation and assessment of examinations.
Section 10: High-risk AI systems
The use of high-risk AI systems in teaching is prohibited unless the AI Council approves exceptions in compliance with the EU AI Act.
Section 11: Entry into force and review
(1) These guidelines shall enter into force on the day following their approval by the Presidential Board.
(2) The guidelines shall be reviewed regularly and adapted to technical and legal developments.
FAQ
Regarding Section 1: Purpose and scope
The AI guidelines address the use of artificial intelligence (AI) in teaching at Leibniz Universität Hannover and are binding.
The aim of the guidelines is to implement the requirements resulting from the European Artificial Intelligence Act (hereinafter referred to as the EU AI Act), which entered into force on 1 August 2024, in the field of teaching and to ensure the proper use of AI systems at LUH.
By providing central AI systems, LUH is an operator within the meaning of Art. 3 No. 4 of the EU AI Act. The topics covered include responsibilities (section 2), teaching of AI skills (section 3), use of AI (sections 4–5), and legal and derived framework conditions (sections 6–10).
Regarding Section 3: Teaching AI skills
As an operator of AI systems, LUH ensures that users are properly trained to use AI systems and that their AI skills are developed (Art. 4 EU AI Act). Participation in relevant training is documented.
LUH provides central training materials that highlight the risks and requirements associated with the use of AI systems. If an AI system is to be used in a course, the teaching staff are obliged to train the users accordingly or to integrate the training materials into the course.
Regarding Section 4: Use of centrally provided and non-centralised AI systems
Regarding (2): The compulsory use of non-centralised AI systems in courses requires the approval of the LUH AI Council and is only permitted if no personal data is processed.
Non-centralised AI systems can be either services from external providers (including AI systems provided as part of university cooperation agreements, where these are not included in the central services provided by LUH) or tools that have been programmed by members of teaching staff.
If non-centralised services from external providers are to be used, the resources linked on this page provide initial guidance. The compulsory use of non-centralised services in teaching is only permitted following approval by the AI Council. The AI Council requires the following information for this:
- Description of the AI system (including whether personal registration/a personal account is required, and whether usage data are used to train the AI)
- Purpose of use and application scenario
- Data collected during use
- For external services: provider’s location and website
The same requirements apply in cases where the AI Council’s special authorisation is requested for the use of high-risk AI systems in accordance with Section 10 of these guidelines.
If, after careful consideration, a teaching staff member wishes to use additional AI systems, this may be done, provided that such use does not contradict the EU AI Act or these guidelines, students are permitted to use the additional systems voluntarily, and the use of these systems is not mandatory for achieving learning objectives or completing coursework.
Regarding (3): The use of non-centralised systems must be documented and, if the AI Council's approval is not required, reported to the AI Council.
The teaching staff member is responsible for complying with the requirements and must document this and report the use to the AI Council. No personal data from the users may be processed when non-centralised services are accessed (registration and login data) or used. When these systems are utilised, teaching staff are obliged to inform students about the service themselves, as the central LUH training materials do not cover non-centralised systems. A declaration from the member of teaching staff is sufficient to satisfy the documentation requirement.
Self-programmed AI systems or AI models developed as part of independent study assignments or final theses are exempted because they are for research purposes (Art. 2, No. 6 EU AI Act). Their use does not need to be documented or reported to the AI Council.
AI systems used exclusively by the teaching staff member as part of a course do not need to be reported to or presented for approval to the AI Council.
Regarding Section 6: Data protection and personal rights
Regarding (1): The processing of personal data in AI systems is not permitted without the explicit authorisation of the AI Council.
This prevents violations of personal rights that could arise if AI-generated results containing false attributes, characteristics or statements were attributed to a real person.
Regarding (2): The entry of trade secrets and sensitive research data into AI systems is prohibited in order to prevent the unauthorised use or disclosure of such secrets.
The entry of secrets within the meaning of Section 2 (1) of the German Trade Secrets Act (GeschGehG) may constitute unauthorised use or disclosure within the meaning of Section 4 (2) and (3) GeschGehG. This could be the case, for example, if thesis papers or placement reports that have involved cooperation with a company are entered into AI systems or if research results are entered for evaluation by an AI system. When using AI systems, the entry of trade secrets and sensitive research information must be avoided. The unauthorised use or disclosure of trade secrets may have civil, labour or criminal law consequences.
Regarding Section 7: Copyright
When it comes to copyright, a distinction can be made between active input during prompting or in the course of the chat and the use of the results. If texts are copied into the system for analysis or editing, this constitutes reproduction.
Regarding (1): When entering copyright-protected texts into AI systems, the applicable copyright provisions must be observed.
• You may upload your own texts.
• Texts belonging to others may be reproduced with their personal consent or, in the case of texts in the public domain, if a licence permits this or permission is granted by law.
Provided that the input is not used to train the AI and the process is not stored, this should constitute temporary reproduction within the meaning of Section 44a of the German Copyright Act (UrhG) and thus constitute privileged use. However, if the input leads to a change in the work, the use still requires the permission of the author according to the European Court of Justice (2012).
Regarding (2): The results of AI systems are considered to be in the public domain unless they have been further developed by the user through a significant amount of intellectual effort.
For protection under copyright law, a personal intellectual creation by a human being is necessary. Since machines cannot achieve this, the results generated are generally considered not to be protected by copyright and therefore to be in the public domain. Users of this content can only claim authorship if the AI-generated results have been edited, further developed, etc. to a significant extent through their own intellectual effort, i.e. if the AI is only used as a tool for their own work.
Since the use of AI-generated content can still (unintentionally) lead to copyright infringements, e.g. due to strong similarities between the AI-generated content and existing copyright-protected works, it is strongly recommended that the results generated be checked (e.g. by reverse searching or similar methods on the Internet).
Regarding Section 8: Use in examinations
There are three key aspects to examinations: the production of the work to be assessed, the documentation and submission of this item as an independently produced piece of work, and the assessment process. The type and scope of documentation to be provided by examination candidates regarding the use of AI systems in assessments (citation method, etc.) must be made clear in advance.
Regarding (2): The assessment type must uphold the principle of equal opportunities and be compatible with the examination regulations.
An assessment type suitable for the use of AI must be selected. In particular, the use of AI must not be expected to compromise the principle of equal opportunities. Teaching staff are responsible for designing the assessment accordingly and for checking in advance whether the use of AI is in line with the relevant examination regulations.
Regarding (3): Assessments must be evaluated individually by a person; automated correction is not permitted.
Assessments may not be loaded into AI systems without consent, nor may they be corrected by AI systems alone. Students are entitled to individual (human) assessment, meaning that all written assignments must be personally reviewed and graded by the examiners.
The extent to which the use of AI systems in examinations will lead to specific stipulations in the examination regulations and, if necessary, in the model examination regulations (MPO) is currently the subject of internal discussions at LUH (Teaching and Academic Programmes Working Group, vice president for education). The amendment to the MPO will be passed by the Senate at the end of 2025 and will then form the basis for all examination regulations.
Sources
- ECJ, order of 17 January 2012, Infopaq II, C-302/10, ECLI:EU:C:2012:16, para. 54
- Graupe, S., Horstmann, J. & Pfeiffenbring, J. (2024). Datenschutzrechtliche Informationen zur Durchführung von Forschungsvorhaben an der Leibniz Universität Hannover (3rd ed.).
- Hofmann, F. (2024). Retten Schranken Geschäftsmodelle generativer KI-Systeme. ZUM 2024, 166–173.
- Maamar, N. (2023). Urheberrechtliche Fragen beim Einsatz von generativen KI-Systemen. ZUM 2023(7), 481–491.
- Salden, P. & Leschke, J. (2023). Didaktische und rechtliche Perspektiven auf KI-gestütztes Schreiben in der Hochschulbildung. Ruhr-Universität Bochum.
Glossary
Non-centralised services
AI applications that are not provided by LUH. These can be either services from external providers (including AI systems provided as part of university cooperation agreements, where these are not included in the central services provided by LUH) or tools that have been programmed by members of teaching staff.
High-risk AI systems
AI systems that can determine an individual’s educational and professional path and thus influence this individual’s ability to obtain an income are classified as high-risk within the meaning of Art. 6 No. 2 of the EU AI Act. This includes systems used to determine access or admission to educational institutions, to grade examinations and assessments, and to detect unauthorised behaviour during examinations. AI-supported feedback tools (“learning analytics”) which, for example, provide recommendations regarding participation in examinations are also classified as high-risk systems.
AI provider
A ‘provider’ within the meaning of Article 3(3) of the EU AI Act is someone who places an AI application covered by the act on the market or puts it into service. This can be an individual or a company. A provider is responsible for ensuring that the AI application complies with the rules of the act before it is made available to others, whether by sale or by provision for use.
AI operator
An ‘operator’ within the meaning of Article 3(4) of the EU AI Act is a person or company that uses an AI application. This means that the operator is responsible for how this AI application is actually used.
In contrast to the ‘provider’, who places the AI application on the market, the operator is the person who uses the AI application for specific purposes or in specific situations. The operator must also ensure that the use of the AI application complies with the rules and requirements of the act.
Note: Teaching staff who require the compulsory use of tools not provided by the university thereby change their role. They go from being users to operators of an AI application – with all the legal obligations and consequences that this entails.
LUH AI Council
A decision-making body responsible for assessing the use of AI systems that are not operated by LUH. In addition, it monitors the implementation of the EU AI Act as well as updates to the university’s AI guidelines and other accompanying measures. It is composed of the vice president for education, the chief information officer (CIO), the data protection officer and one representative each from the technical, legal and teaching areas. It is coordinated by the CIO.
Personal data
The term ‘personal data’ is defined in Art. 4, No. 1 GDPR. According to the regulation, personal data is any information relating to an identified or identifiable natural person (or ‘data subject’). A natural person is considered identifiable if they can be identified, directly or indirectly, in particular by association with an identifier such as a name, an identification number, location data, an online identifier or one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. The reference to a specific person can therefore result directly or indirectly (e.g. through the linking of different data sets or through elimination procedures) from various identity-related characteristics (Graupe, Horstmann & Pfeiffenbring, 2024).
Examples include
- Name
- Enrolment number
- Other individual identifiers (LUH ID/WebSSO number)
- Personal email addresses
- Photos
- Video and audio recordings
Prohibited practices (Art. 5 EU AI Act)
The EU AI Act aims to prohibit certain practices that are considered too dangerous or unethical. These are described in Article 5 of the act. Prohibited practices are those that pose an unacceptable risk. These include, for example, AI systems that manipulate people in order to influence their behaviour in a harmful way, or systems that allow people to be closely monitored without their knowledge. AI applications that divide people into groups based on characteristics such as appearance or voice, which could lead to discrimination, are also prohibited.
Central services
AI applications provided centrally by LUH.
Contact
Please direct any further questions or suggestions regarding the use of AI systems in teaching to the office of the chief information officer (CIO). In the event of changes to AI tools provided by LUH, these guidelines shall continue to apply mutatis mutandis.