November 2022 was a landmark month in the history of IT and AI-driven domains. OpenAI launched ChatGPT (Generative Pre-trained Transformer), a more capable successor to its earlier model, InstructGPT. Techies worldwide were amazed by the tool's capabilities and began evaluating the good, the bad, and the ugly sides of its use. While many praised the innovation and looked for ways to use its potential optimally, experts feared its malicious use. Like any other innovation in the history of humankind, this one has created a buzz. Let's dive deeper into the subject and conclude with some fact-based discoveries.
AI: A Shield or a Strike
Artificial Intelligence has been a groundbreaking development in the technology universe. Machines that can learn, much as humans do, offer limitless possibilities for innovation. Intelligent apps and devices are making daily human life easier and more exciting, and machines now reach places where physical human access is nearly impossible. On the other hand, AI raises issues such as unemployment, and its massive capabilities can be manipulated by malicious actors, since AI still cannot apply emotional and human wisdom when making decisions. Hence, in the current scenario it is safe to say that AI's goodness or evilness still rests in the hands of the humans who use it.
ChatGPT Technology Strengths and Limitations
- A large-scale language model with a vast knowledge base can perform various language tasks.
- Advanced Natural Language Processing capabilities, including text generation, language translation, question-answering, and summarization.
- High flexibility and customization allow fine-tuning for specific use cases.
- Despite its vast knowledge base, it is still prone to making mistakes and lacks common-sense reasoning ability.
- It can struggle with tasks that require a deeper understanding of context and causality.
- Its reliance on patterns in its training data may perpetuate biases and stereotypes present in that data.
How ChatGPT Can Affect Online Exams
The ChatGPT tool can potentially affect remote online examinations by providing answers to exam questions through its extensive language-model training. This could lead to academic dishonesty and compromise the credibility of the exam. Beyond questions with definite answers, it can also write short to medium-length essays, which can compromise exams where students need to exhibit their own knowledge and creativity.
Beyond its potential for misuse, the ChatGPT app can legitimately help students prepare for online proctored exams in the following ways:
- Understanding exam format and instructions: ChatGPT can answer questions related to the structure and instructions of the exam, such as the time limit, number of questions, and allowed resources.
- Vocabulary and terminology: ChatGPT can help to understand and recall the meaning of difficult words and phrases used in the exam.
- Practice questions: ChatGPT can help to answer practice questions to prepare for the exam and identify areas where students may need additional study.
However, it is important to note that using ChatGPT to answer exam questions directly is considered academic misconduct and can result in severe consequences, such as failing the exam or being expelled from school or college. Online exams should be taken honestly and with proper preparation.
Online Proctoring Software: Buckle Up
ChatGPT is undoubtedly a groundbreaking technology, and it will surely be helpful in numerous constructive activities globally. However, misuse, such as cheating in online exams, is also bound to happen.
To prevent incidents of cheating with ChatGPT, it is crucial for exam administrators to implement strict security measures. Proctoring software such as Proctortrack monitors students during exams and helps ensure the integrity of the testing process. Additionally, exam questions should be regularly changed and randomized to minimize the risk of answers being readily available.
Here are a few ways online proctoring systems like Proctortrack use technology to prevent cheating with the ChatGPT tool:
Live human proctoring:
A human online exam proctor can monitor the exam-taker through a webcam and microphone, connect with them, and answer any questions they may have during the exam. The proctor can also watch for any kind of suspicious behavior.
Artificial intelligence algorithms:
AI algorithms can detect cheating by analyzing exam-takers' behavior, such as changes in typing speed or mouse movement or attempts to access the OpenAI website, and flagging any suspicious activity for review.
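To make the idea concrete, the behavioral analysis above can be sketched as a simple statistical outlier check on keystroke timing. The function below is a minimal illustration under assumed parameters (a z-score threshold of 3), not Proctortrack's actual algorithm; real systems combine many more signals.

```python
import statistics

def flag_typing_anomalies(intervals, z_threshold=3.0):
    """Flag keystroke intervals (in seconds) that deviate sharply from
    the exam-taker's own baseline -- e.g., a long pause followed by a
    burst of pasted text. Returns the indices of anomalous intervals."""
    mean = statistics.mean(intervals)
    stdev = statistics.pstdev(intervals)
    if stdev == 0:  # perfectly uniform typing: nothing to flag
        return []
    return [i for i, gap in enumerate(intervals)
            if abs(gap - mean) / stdev > z_threshold]

# Steady typing followed by one long pause gets flagged for review:
print(flag_typing_anomalies([0.2] * 20 + [5.0]))  # -> [20]
```

A flagged index would not be treated as proof of cheating on its own; it would simply be queued for human review alongside the webcam feed.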
Secure, locked-down browser:
Exam-takers may be required to use a secure, locked-down browser that prevents them from accessing other websites (including ChatGPT) or programs during the exam.
Randomized questions and answers:
The questions and answer options in an online exam should be presented in random order to make it difficult for exam-takers to share solutions or cheat using ChatGPT.
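The randomization described above can be sketched in a few lines. The question representation (dicts with "prompt" and "options" keys) and the per-session seed are illustrative assumptions, not any particular platform's data model.

```python
import random

def randomize_exam(question_bank, n_questions, session_seed):
    """Draw a random subset of questions for one exam session and
    shuffle each question's answer options, so no two sessions see
    the same paper in the same order."""
    # A per-session seed keeps the draw reproducible for later review.
    rng = random.Random(session_seed)
    selected = rng.sample(question_bank, n_questions)
    paper = []
    for q in selected:
        options = list(q["options"])  # copy so the bank is untouched
        rng.shuffle(options)
        paper.append({"prompt": q["prompt"], "options": options})
    return paper
```

Because the seed is stored per session, administrators can regenerate any student's exact paper when investigating a flagged exam.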
The success of an online proctoring system depends on many factors: a combination of technology, human oversight, and best practices in exam design and administration.
ChatGPT cannot adversely affect academic integrity in online proctored exams without malicious intent on the user's part. It is a language model designed to generate human-like text in response to prompts; it cannot directly impact the security or fairness of an online exam. However, like other AI systems, it may have unintended consequences that compromise the integrity of an exam, and as with any technology, bugs or malfunctions could affect an exam's fairness. Online exam proctoring tools like Proctortrack must therefore undergo thorough testing and validation to minimize the risk of any adverse effects now that ChatGPT exists in the technology domain.