I believe that both extremes — completely rejecting the use of AI in coding because it is “harmful,” and over-relying on AI without any independent thought — are equally one-sided. To me, AI should be viewed as a personal, highly efficient secretary or assistant. We need an assistant to handle tedious or repetitive tasks, but the creative and original parts must still come from us. I hope everyone can read my argument patiently before responding.
In today’s world, where AI is developing faster than ever, refusing to use it entirely only lowers efficiency. After all, most companies or courses (unless they explicitly prohibit AI use) only care about your results — not how you wrote the code. Frankly, writing an entire project by hand is nothing to boast about anymore. It’s like choosing to swim across the Pacific Ocean when you could just fly from Los Angeles to Beijing. For large projects, writing every line manually is not only time-consuming but also error-prone. Moreover, AI coding tools such as ChatGPT or Claude produce code that is beautifully formatted — clean, concise, and logically structured. Modifying parameters later becomes much easier. If your project will be handed off to someone else or evaluated by others, this clarity is invaluable.
As a student majoring in Mathematics with a minor in Data Science, I deal with a lot of code. I once took over two projects: one clearly refined with GPT, and one completely handwritten. The GPT-assisted code was so neat that I understood the logic within ten minutes and could immediately continue the work. The handwritten one, however, was chaotic: inconsistent parameters, messy indentation, unclear structure. I lost patience after a day and ended up asking GPT to help me understand what the code was doing (AI use was allowed for that project; I would never be proud of violating the rules).
Now, I want to make my stance clear: AI should not be abused, but it is a highly efficient personal assistant. In one of my university courses, I admit I once overused AI (though AI use was permitted): I copied and pasted code I didn't fully understand. My professor, a respected MIT research scientist, spoke with me privately for an hour. He said:

"I completely allow the use of AI, but you must first understand the knowledge behind it. Your code shows no independent thought. My assignments are not designed for AI to solve directly; you need to combine your understanding and creativity with AI's assistance."

His words stayed with me. Later, I read similar opinions from other professors: AI automates low-efficiency, repetitive work, but this only makes our originality more important than ever. Indeed, AI will replace many repetitive tasks. For example, in data analysis, AI can instantly generate a full matplotlib plot in Python, with axes, colors, and scales, without you touching a single line. But we must decide what to plot and why: for instance, when visualizing racial data, a pie chart may best show proportional differences.

```python
import matplotlib.pyplot as plt

# Proportions are exactly what a pie chart communicates best.
labels = ['White', 'Black', 'Asian', 'Hispanic']
sizes = [45, 25, 20, 10]  # percentages of the population

plt.pie(sizes, labels=labels, autopct='%1.1f%%', startangle=90)
plt.title('Racial Composition of a Population')
plt.show()
```

AI can do the routine part, but the insight, choosing the right visualization to express meaning, must come from us. That's why I see AI as a powerful partner. It thinks comprehensively, structures code neatly, and works incredibly fast. But our ideas, creativity, and innovations are irreplaceable; they come only through deep learning and real experience. For instance, in my C++ Programming for Financial Engineering course (don't worry, I strictly followed the no-AI rule there), every topic felt precious, distilled from the wisdom of top quantitative finance institutions. I took detailed notes and carefully studied each example, analyzing how algorithms and matrices were constructed, and even wrote my own checklist program to deepen my understanding.
These concepts are fully original and must be deeply understood. However, in future applications, I believe AI can still help. With the foundation I’ve built from that course, I can now refine and adjust AI-generated code, making it more elegant, efficient, and — most importantly — reflective of my own thinking.
That, in my opinion, is the ideal balance: AI-assisted, but human-driven.
Some examples of how I have used AI as an assistant:
1. During my Calculus course, I had to master a large number of theorems and techniques. Before the exam, I spent some time talking with ChatGPT, going through every theorem and corollary one by one. It helped me format everything neatly in Markdown: integration by parts, the quotient rule, integration by substitution, and all the derivatives and integrals of trigonometric functions. Together, we built an extremely complete Calculus notebook, which I later imported into Google Colab and exported as a PDF for daily review. I ended up scoring full marks on every exam in that class.
2. Similarly, when preparing for the IELTS speaking test, I spent a week practicing conversations with ChatGPT on various topics. My final speaking score was 7.0, which I think is quite decent. These experiences truly showed me how AI can act as an efficient and reliable personal assistant when used properly.
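The notebook workflow in the first example can be sketched in a few lines. This is just an illustration under my own assumptions: the note strings are hypothetical stand-ins for what a ChatGPT session might produce, and an .ipynb file is plain JSON, so the standard library is enough to build one that Colab can open (from there, File → Print lets you save it as a PDF).

```python
import json

# Hypothetical Markdown note snippets; in practice these would come
# from a ChatGPT session reviewing theorems one by one.
notes = [
    "# Calculus Review",
    "**Integration by parts:** $\\int u\\,dv = uv - \\int v\\,du$",
    "**Substitution:** $\\int f(g(x))\\,g'(x)\\,dx = \\int f(u)\\,du$",
]

# A minimal nbformat-v4 notebook is just JSON: one Markdown cell per note.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": text}
        for text in notes
    ],
}

with open("calculus_notes.ipynb", "w") as f:
    json.dump(nb, f, indent=1)
```

The resulting file uploads directly to Google Colab or opens in Jupyter, where the Markdown cells render with the LaTeX formulas typeset.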
I welcome everyone to share your perspectives — and professors, I’d especially love to hear your more experienced insights. @Daniel Duffy @Andy Nguyen