by Eddie Garmat
My Experience
Recently I got the opportunity to serve as the lead of an engineering team at my university. One challenge I faced: another team member was set on using ChatGPT for everything. He tried to convince others to use ChatGPT to write our code, develop our circuits, and even design our chassis. This made me wonder: if he could earn a degree without trying to understand any of the material, how can we verify that real-world engineers don’t simply use AI for all their projects? It turns out solutions to this problem are already being developed and tested in the professional world, including capstone projects, interview tests, and more.
Interview Options
When interviewing an applicant, an interviewer has a couple of options for testing the interviewee’s actual skill. The first is an oral exam with field-specific questions in which the applicant has no access to any tools, which forces him or her to demonstrate personal knowledge to the interviewer. However, the pressure of a face-to-face conversation may cause some people to underperform. In that case, a closed testing environment is ideal: much like an exam in school, answering field-related questions with no access to external tools requires an applicant to display his or her own knowledge. What’s more, research with professionals suggests that verifying these skills in an interview matters even more than the degree itself.
Hands-on Work
Many majors include projects that require hands-on work which can’t be completed by AI. In engineering specifically, such projects include line-following robots, rockets, and wall-climbing robots. Many fields also include a capstone project, which allows a student to apply his or her knowledge of a subject to real-world problems. Assessing projects like these may become critical when evaluating potential candidates’ expertise.
During my team project, a failure to apply this kind of knowledge is exactly how my classmate was exposed for not understanding what we were working on. To complete the project, we had to present work demonstrating both evidence of contribution and an understanding of it, and he could provide neither.
Long-term evaluation
Another approach is to assess a candidate over time by tracking the quality of outputs such as presentations and projects. While artificial intelligence may be great at writing, if the human prompting it doesn’t fully understand the project, neither will the AI. Over time this becomes evident: the writing and presentations may look good at a surface level but fail to demonstrate a real grasp of a project’s specifications. Tracking these capabilities may in fact be one of the best use cases of AI for human resources; Forbes reports that 93% of Fortune 500 chief HR officers are using AI for this purpose.
Knowing vs. Prompting
AI has enabled a rise in students who can pursue a degree in higher education without understanding much of what that degree is supposed to reflect. However, these people will be found out during job interviews and evaluations. Long-term evaluation strategies can help determine whether candidates possess a genuine understanding of their field, weeding out those who depend excessively on AI to complete their tasks.
Want to know more about AlignIQ’s best practices for integrating AI into educational domains? Aligning AI With Purpose: Our Do’s & Don’ts
Want to compare the roles of AI and human cognition? AI Executes, Humans Interpret: The Future of Intelligence
Need insight into AlignIQ founders’ focus on human-centric approaches to AI integration? Why I Started AlignIQ: A Vision of Human-Centric AI