We’re living in a time of much political uncertainty, and there aren’t many voices loud enough to cut through the persistent misinformation. Facts and clarity are in short supply across social media platforms, and curious, impressionable students need answers to the tough questions surrounding politics right now. Questions like: How does Elon Musk have so many children? What did vaccines ever do to Robert F. Kennedy? Who would win in a fight, JD Vance or Marco Rubio?
“These and many more will all be answered in good time, my friend,” said Glen Halva-Neubauer, chair of the Politics and International Affairs Department here at Furman. Halva-Neubauer, among others in his department, has pooled efforts into developing a generative AI specifically designed to fight misinformation on campus, called FART (Furman Accurate Reporting Technology). “Its entirely evidence-based, practical approach to politics is designed to go against the norm of the department’s idealistic delusion,” said Halva-Neubauer.
I then asked about the model’s creation. “We started by feeding FART old tests from students across the department but soon realized that data was unreliable. After removing those in Greek Life from the data pool though, the results improved drastically. This remaining data was supplemented by the Truth Social account of the Furman Barber, which was obtained only after disarming the individual and receiving a decent fade.”
We soon logged on to the AI, which was surprisingly user-friendly. I started by asking FART what it thought of the Trump administration. “Finally we’re back in charge. Have fun getting owned, snowflake.” I then asked it to list its favorite Americans of all time. “George Washington, Alexis de Tocqueville, and Joe Rogan.” After informing it that Tocqueville was in fact French, FART said, “He’s more American than a lot of these kids around here.”
After using it for a while though, I became aware of some limitations of the model. FART crashed every time a user typed in ‘climate change’ or ‘critical race theory’. At one point I mentioned DEI, to which the AI called me a “Wokearista” and started repeatedly asking if I planned on ‘cancelling’ it. It was unwilling to answer any questions concerning the 2020 election, and only responded to the recent one with the phrase, “We’re so back.”
Even when it fully answered questions, the information was often faulty or misleading. Simple questions about the moon would lead to FART ranting about how the moon was a Huawei-projected hologram and that waves “just happen on their own.” When pressed, the model would rescind information and say, “It’s a joke alright? I didn’t actually mean it.” I’m fairly sure the phrase “legalize comedy” was used quite explicitly as well.
Halva-Neubauer then showed me the variety of settings FART had. “It’s personalized to be completely effective in any scenario,” he said proudly. The Mock Trial mode simulates an over-caffeinated, ridiculously anxious and opinionated student. The FUSAB mode rids the model of any emotional depth and provides information in an almost incoherent, superficial jargon. The Athlete mode provides a below-average intelligence coupled with an above-average sense of self and ego.
“Here, let me throw on The Paladin mode. You’ll get a kick out of this!” said Halva-Neubauer. The AI instantly became entirely too pretentious and pedantic, whining about SGA meetings and how mean the administration is. It started drafting another KA article because “Why not? We don’t have any other ideas,” before I stopped it. Seeing those idiots at The Paladin portrayed so accurately was good enough for me, and I thanked Halva-Neubauer and went on my way.
FART will be readily available to students in Fall 2025, so be on the lookout. They’re feeding it YDSA Yik Yaks throughout the summer for further data training, and several POL majors have even suggested using The Horse as an additional supplement. Let us know what you think about FART and its place at Furman in the future!