On May 8, O’Reilly Media will be hosting Coding with AI: The End of Software Development as We Know It—a live virtual tech conference spotlighting how AI is already supercharging developers, boosting productivity, and providing real value to their organizations. If you’re in the trenches building tomorrow’s development practices today and interested in speaking at the event, we’d love to hear from you by March 5. You can find more information and our call for presentations here.
Hi, I’m a professor of cognitive science and design at UC San Diego, and I recently wrote posts on Radar about my experiences coding with and speaking to generative AI tools like ChatGPT. In this post I want to talk about using generative AI to extend one of my academic software projects—the Python Tutor tool for learning programming—with an AI chat tutor. We often hear about GenAI being used in large-scale industrial settings, but we don’t hear nearly as much about smaller-scale not-for-profit projects. Thus, this post serves as a case study on adding generative AI to a personal project where I didn’t have much time, resources, or expertise at my disposal. Working on this project got me really excited about being here at this moment, right as powerful GenAI tools are starting to become more accessible to nonexperts like myself.
For some context, over the past 15 years I’ve been running Python Tutor (https://pythontutor.com/), a free online tool that tens of millions of people around the world have used to write, run, and visually debug their code (first in Python and now also in Java, C, C++, and JavaScript). Python Tutor is mainly used by students to understand and debug their homework assignment code step-by-step by seeing its call stack and data structures. Think of it as a virtual instructor who draws diagrams on a whiteboard to show runtime state. It’s best suited for small pieces of self-contained code that students commonly encounter in computer science classes or online coding tutorials.
Here’s an example of using Python Tutor to step through a recursive function that builds up a linked list of Python tuples. At the current step, the visualization shows two recursive calls to the listSum function and various pointers to list nodes. You can move the slider forward and backward to see how this code runs step-by-step:

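The visualization above is interactive on the site and doesn’t reproduce here. To give a concrete flavor of the kind of code involved, here’s a tiny example in the same spirit (a stand-in, not the exact code from the screenshot) that you can paste into Python Tutor yourself:

def listSum(node):
    # each list node is a tuple: (value, pointer to the next node)
    if node is None:
        return 0
    value, rest = node
    return value + listSum(rest)  # each recursive call adds a new stack frame

my_list = (1, (2, (3, None)))  # linked list of tuples: 1 -> 2 -> 3
print(listSum(my_list))        # prints 6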
AI Chat for Python Tutor’s Code Visualizer
Way back in 2009 when I was a grad student, I envisioned creating Python Tutor to be an automated tutor that could help students with programming questions (which is why I chose that project name). But the problem was that AI wasn’t nearly good enough back then to emulate a human tutor. Some AI researchers were publishing papers in the field of intelligent tutoring systems, but there were no widely available software libraries or APIs that could be used to make an AI tutor. So instead I spent all those years working on a versatile code visualizer that could be *used* by human tutors to explain code execution.
Fast-forward 15 years to 2024, and generative AI tools like ChatGPT, Claude, and many others based on LLMs (large language models) are now really good at holding human-level conversations, especially about technical topics related to programming. In particular, they’re great at generating and explaining small pieces of self-contained code (e.g., under 100 lines), which is exactly the target use case for Python Tutor. So with this technology in hand, I used these LLMs to add AI-based chat to Python Tutor. Here’s a quick demo of what it does.
First I designed the user interface to be as simple as possible: It’s just a chat box below the user’s code and visualization:

There’s a dropdown menu of templates to get you started, but you can type in any question you want. When you click “Send,” the AI tutor will send your code, current visualization state (e.g., call stack and data structures), terminal text output, and question to an LLM, which will respond here with something like:

Note how the LLM can “see” your current code and visualization, so it can explain to you what’s going on here. This emulates what an expert human tutor would say. You can then continue chatting back and forth like you would with a human.
In addition to explaining code, another common use case for this AI tutor is helping students get unstuck when they encounter a compiler or runtime error, which can be very frustrating for beginners. Here’s an index out-of-bounds error in Python:

Whenever there’s an error, the tool automatically populates your chat box with “Help me fix this error,” but you can pick a different question from the dropdown (shown expanded above). When you hit “Send” here, the AI tutor responds with something like:

Note that when the AI generates code examples, there’s a “Visualize Me” button beneath each one so that you can instantly visualize it in Python Tutor. This allows you to visually step through its execution and ask the AI follow-up questions about it.
Besides asking specific questions about your code, you can also ask general programming questions or even career-related questions like how to prepare for a technical coding interview. For instance:

… and it will generate code examples that you can visualize without leaving the Python Tutor website.
Benefits over Directly Using ChatGPT
The obvious question here is: What are the benefits of using AI chat within Python Tutor rather than pasting your code and question into ChatGPT? I think there are several main benefits, especially for Python Tutor’s target audience of beginners who are just starting to learn to code:
1) Convenience – Millions of students are already writing, compiling, running, and visually debugging code within Python Tutor, so it feels very natural for them to also ask questions without leaving the site. If instead they have to select their code from a text editor or IDE, copy it into another site like ChatGPT, and then maybe also copy their error message and terminal output and describe what’s going on at runtime (e.g., values of data structures), that’s a much more cumbersome user experience. Some modern IDEs do have AI chat built in, but those require expertise to set up since they’re meant for professional software developers. In contrast, the main appeal of Python Tutor for beginners has always been its ease of access: Anyone can visit pythontutor.com and start coding right away without installing software or creating a user account.
2) Beginner-friendly LLM prompts – Next, even if someone were to go through the trouble of copy-pasting their code, error message, terminal output, and runtime state into ChatGPT, I’ve found that beginners aren’t good at coming up with prompts (i.e., written instructions) that direct LLMs to produce easily understandable responses. Python Tutor’s AI chat addresses this problem by augmenting chats with a system prompt like the following to emphasize directness, conciseness, and beginner-friendliness:
You are an expert programming teacher and I am a student asking you for help with ${LANGUAGE}.
– Be concise and direct. Keep your response under 300 words if possible.
– Write at the level that a beginner student in an introductory programming class can understand.
– If you need to edit my code, make as few changes as needed and preserve as much of my original code as possible. Add code comments to explain your changes.
– Any code you write should be self-contained and runnable without importing external libraries.
– Use GitHub Flavored Markdown.
It also formats the user’s code, error message, relevant line numbers, and runtime state in a well-structured way for LLMs to ingest. Lastly, it provides a dropdown menu of common questions and commands like “What does this error message mean?” and “Explain what this code does line-by-line” so beginners can start crafting a question right away without staring at a blank chat box. All of this behind-the-scenes prompt templating helps users avoid common problems with directly using ChatGPT, such as it producing explanations that are too wordy, jargon-filled, and overwhelming for beginners. (A sketch of what this templating might look like follows below.)
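The exact formatting code isn’t shown in this post, so here’s a simplified illustration of the general idea—the field names and the build_messages helper are stand-ins, not my real code:

def build_messages(language, code, error_msg, runtime_state, question):
    # bundle everything Python Tutor already knows into one structured user message
    context = (
        f"My {language} code:\n```\n{code}\n```\n\n"
        f"Error message: {error_msg or 'none'}\n"
        f"Current runtime state: {runtime_state}\n\n"
        f"My question: {question}"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # the prompt shown above
        {"role": "user", "content": context},
    ]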
3) Running your code instead of just “looking” at it – Finally, if you paste your code and question into ChatGPT, it “inspects” your code by reading over it like a human tutor would. But it doesn’t actually run your code, so it doesn’t know what function calls, variables, and data structures really exist during execution. While modern LLMs are good at guessing what code does by “looking” at it, there’s no substitute for running code on a real computer. In contrast, Python Tutor runs your code, so when you ask AI chat about what’s going on, it sends the actual values of the call stack, data structures, and terminal output to the LLM, which again hopefully leads to more helpful responses.
Using Generative AI to Build Generative AI
Now that you’ve seen how Python Tutor’s AI chat works, you might be wondering: Did I use generative AI to help me build this GenAI feature? Yes and no. GenAI helped me the most when I was getting started, but as I got deeper in, I found less of a use for it.
Using Generative AI to Create a Mock-Up User Interface
My approach was to first build a stand-alone web-based LLM chat app and later integrate it into Python Tutor’s codebase. In November 2024, I bought a Claude Pro subscription since I heard good buzz about its code generation capabilities. I began by working with Claude to generate a mock-up user interface for an LLM chat app with familiar features like a user input box, text bubbles for both the LLM’s and human user’s chats, HTML formatting with Markdown, syntax-highlighted code blocks, and streaming the LLM’s response incrementally rather than making the user wait until it finished. None of this was innovative—it’s what everyone expects from using an LLM chat interface like ChatGPT.
I liked working with Claude to build this mock-up because it generated live, runnable versions of HTML, CSS, and JavaScript code so I could interact with it in the browser without copying the code into my own project. (Simon Willison wrote a nice post about this Claude Artifacts feature.) However, the main downside is that whenever I requested even a small code tweak, it could take up to a minute or so to regenerate all of the project code (and sometimes annoyingly leave parts as incomplete […] segments, which made the code not run). If I had instead used an AI-powered IDE like Cursor or Windsurf, then I would’ve been able to ask for fast incremental edits. But I didn’t want to bother setting up more complex tooling, and Claude was good enough for getting my frontend started.
A False Start with Locally Hosting an LLM
Now onto the backend. I originally started this project after playing with Ollama on my laptop, which is an app that allowed me to run LLMs locally for free without having to pay a cloud provider. A few months earlier (September 2024), Llama 3.2 had come out, which featured smaller models like 1B and 3B (1 and 3 billion parameters, respectively). These are much less powerful than state-of-the-art models, which are 100 to 1,000 times bigger at the time of writing. I had no hope of running larger models locally (e.g., Llama 405B), but these smaller 1B and 3B models ran fine on my laptop, so they seemed promising.
Note that the last time I tried running an LLM locally was GPT-2 (yes, 2!) back in 2021, and it was TERRIBLE—a pain to set up by installing a bunch of Python dependencies, super slow to run, and producing nonsensical results. So for years I didn’t think it was feasible to self-host my own LLM for Python Tutor. And I didn’t want to pay to use a cloud API like ChatGPT or Claude since Python Tutor is a not-for-profit project on a shoestring budget; I couldn’t afford to offer a free AI tutor for over 10,000 daily active users while eating all the expensive API costs myself.
But now, three years later, the combination of smaller LLMs and Ollama’s ease of use convinced me that the time was right to self-host my own LLM for Python Tutor. So I used Claude and ChatGPT to help me write some boilerplate code to connect my prototype web chat frontend with a Node.js backend that called Ollama to run Llama 1B/3B locally. Once I got that demo working on my laptop, my goal was to host it on a few university Linux servers that I had access to.
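That boilerplate was in Node.js and isn’t reproduced here, but the gist translates directly to Python; this rough sketch (my illustration, not the actual code) assumes Ollama’s standard local REST API on its default port:

import json
import requests  # pip install requests

def ask_local_llama(messages):
    # Ollama serves a local REST API on port 11434 by default
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3.2:3b", "messages": messages, "stream": True},
        stream=True,
    )
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)  # each line is one JSON chunk of the reply
            yield chunk.get("message", {}).get("content", "")

for text in ask_local_llama([{"role": "user", "content": "Explain Python lists."}]):
    print(text, end="", flush=True)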
But barely one week in, I got bad news that ended up being a huge blessing in disguise. Our university IT folks told me that I wouldn’t be able to access the few Linux servers with enough CPUs and RAM to run Ollama, so I had to scrap my initial plans for self-hosting. Note that the kind of cheap server I wanted to deploy on didn’t have GPUs, so they ran Ollama much more slowly on their CPUs. But in my initial tests, a small model like Llama 3.2 3B still ran okay for a few concurrent requests, producing a response within 45 seconds for up to four concurrent users. This isn’t “good” by any measure, but it’s the best I could do without paying for a cloud LLM API, which I was afraid to do given Python Tutor’s sizable user base and tiny budget. I figured that if I had, say, four replica servers, then I could serve up to 16 concurrent users within 45 seconds, or maybe 8 concurrent users within 20 seconds (rough estimates). That wouldn’t be the best user experience, but again, Python Tutor is free for users, so their expectations can’t be sky-high. My plan was to write my own load-balancing code to direct incoming requests to the lowest-load server, plus queuing code so that if more concurrent users tried to connect than a server had capacity for, it would queue them up to avoid crashes. Then I would need to write all the sysadmin/DevOps code to monitor these servers, keep them up-to-date, and reboot them if they failed. This was all a daunting prospect to code up and test robustly, especially since I’m not a professional software developer. But to my relief, I now didn’t have to do any of that grind since the university server plan was a no-go.
Switching to the OpenRouter Cloud API
So what did I end up using instead? Serendipitously, around this time someone pointed me to OpenRouter, which is an API that allows me to write code once and access a variety of paid LLMs by changing the LLM name in a configuration string. I signed up, got an API key, and started making queries to Llama 3B in the cloud within minutes. I was shocked by how easy this code was to set up! So I quickly wrapped it in a server backend that streams the LLM’s response text in real time to my frontend using SSE (server-sent events), which displays it in the mock-up chat UI. Here’s the essence of my Python backend code:
import openai  # OpenRouter uses the OpenAI API, so run "pip install openai" first

client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",  # placeholder: your OpenRouter API key
)

completion = client.chat.completions.create(
    model="<LLM_NAME>",        # placeholder: whichever model you've configured
    messages="<CHAT_HISTORY>", # placeholder: list of {"role": ..., "content": ...} dicts
    stream=True,
)

for chunk in completion:
    text = chunk.choices[0].delta.content
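And here’s a minimal sketch of the SSE wrapper idea, using Flask purely for illustration (my real backend differs, and build_messages_from_request is a stand-in helper):

import json
from flask import Flask, Response, request  # pip install flask

app = Flask(__name__)

@app.route("/ai-chat")
def ai_chat():
    def stream():
        completion = client.chat.completions.create(  # `client` from the snippet above
            model="<LLM_NAME>",
            messages=build_messages_from_request(request),  # stand-in helper
            stream=True,
        )
        for chunk in completion:
            text = chunk.choices[0].delta.content
            if text:
                yield f"data: {json.dumps(text)}\n\n"  # SSE frame; the frontend appends it to the chat bubble
    return Response(stream(), mimetype="text/event-stream")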
OpenRouter does cost money, but I was willing to give it a shot since the prices for Llama 3B looked more reasonable than state-of-the-art models like ChatGPT or Claude. At the time of writing, 3B costs about $0.04 USD per million tokens, while a state-of-the-art LLM costs up to 500x as much (ChatGPT-4o is $12.50 and Claude 3.5 Sonnet is $18). I would be scared to use ChatGPT or Claude at those prices, but I felt comfortable with the much cheaper Llama 3B. What also gave me comfort was knowing I wouldn’t wake up to a huge bill if there were a sudden spike in usage; OpenRouter lets me put in a fixed amount of money, and if that runs out, my API calls simply fail rather than charging my credit card more.
For some extra peace of mind I implemented my own rate limits: 1) Each user’s input and total chat conversation are limited to a certain length to keep costs under control (and to reduce hallucinations, since smaller LLMs tend to go “off the rails” as conversations grow longer); 2) Each user can send only one chat per minute, which again prevents overuse. Hopefully this isn’t a big problem for Python Tutor users, since they need at least a minute to read the LLM’s response, try out suggested code fixes, and then ask a follow-up question.
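Here’s a bare-bones illustration of that rate-limiting logic (simplified, in-memory, with made-up caps—not my production code):

import time

MAX_QUESTION_CHARS = 2000   # illustrative cap, not the site's real number
last_chat_time = {}         # user_id -> timestamp of that user's last chat

def allow_chat(user_id, question):
    if len(question) > MAX_QUESTION_CHARS:
        return False  # bound input length to control cost and reduce hallucinations
    now = time.time()
    if now - last_chat_time.get(user_id, 0) < 60:
        return False  # at most one chat per user per minute
    last_chat_time[user_id] = now
    return True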
Using OpenRouter’s cloud API rather than self-hosting on my university’s servers turned out to be so much better since: 1) Python Tutor users can get responses within a few seconds rather than waiting 30–45 seconds; 2) I didn’t have to do any sysadmin/DevOps work to maintain my servers, or write my own load-balancing or queuing code to interface with Ollama; 3) I can easily try different LLMs by changing a configuration string.
GenAI as a Thought Partner and On-Demand Teacher
After getting the “happy path” working (i.e., when OpenRouter API calls succeed), I spent a bunch of time thinking about error cases and making sure my code handled them well, since I wanted to provide a good user experience. Here I used ChatGPT and Claude as a thought partner by having GenAI help me come up with edge cases that I hadn’t originally considered. I then created a debugging UI panel with a dozen buttons below the chat box that I could press to simulate specific errors in order to test how well my app handled those cases:

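One way to implement such a panel is to have each button tell the backend to fake a particular failure. A simplified illustration of the idea (the flag name and helpers are stand-ins, not my real code):

def call_llm_backend(payload):
    simulate = payload.get("simulate_error")  # set by a debug-panel button
    if simulate == "timeout":
        raise TimeoutError("simulated: LLM took too long to respond")
    if simulate == "rate_limit":
        raise RuntimeError("simulated: provider returned HTTP 429")
    if simulate == "empty_response":
        return ""  # frontend should show a graceful fallback message
    return real_llm_call(payload)  # the normal happy path (defined elsewhere)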
After getting my stand-alone LLM chat app working robustly on error cases, it was time to integrate it into the main Python Tutor codebase. This process took a lot of time and elbow grease, but it was smooth since I made sure my stand-alone app used the same versions of the older JavaScript libraries that Python Tutor was using. This meant that at the start of my project I had to instruct Claude to generate mock-up frontend code using these older libraries; otherwise, by default it would use modern JavaScript frameworks like React or Svelte that would not integrate well with Python Tutor, which is written using 2010-era jQuery and friends.
At this point I found myself not really using generative AI day-to-day, since I was working within the comfort zone of my own codebase. GenAI was useful at the start to help me figure out the “unknown unknowns.” But now that the problem was well-scoped, I felt much more comfortable writing every line of code myself. My daily grind from this point onward involved a lot of UI/UX polishing to make a smooth user experience. And I found it easier to directly write code rather than think about how to instruct GenAI to code it for me. Also, I wanted to understand every line of code that went into my codebase, since I knew that every line would need to be maintained perhaps years into the future. So even if I could have used GenAI to code faster in the short term, that could have come back to haunt me later in the form of subtle bugs that arose because I didn’t fully understand the implications of AI-generated code.
That said, I still found GenAI useful as a replacement for Google or Stack Overflow types of questions like “How do I write X in modern JavaScript?” It’s an incredible resource for learning technical details on the fly, and I often adapted the example code in AI responses into my codebase. But at least for this project, I didn’t feel comfortable having GenAI “do the driving” by generating large swaths of code that I’d copy-paste verbatim.
Finishing Touches and Launching
I wanted to launch by the new year, so as November rolled into December I was making steady progress getting the user experience more polished. There were a million little details to work through, but that’s the case with any nontrivial software project. I didn’t have the resources to evaluate how well smaller LLMs perform on real questions that users might ask on the Python Tutor website, but from informal testing I was dismayed (though not surprised) at how often the 1B and 3B models produced incorrect explanations. I tried upgrading to a Llama 8B model, and it was still not very good. I held out hope that tweaking my system prompt would improve performance. I didn’t spend a ton of time on it, but my initial impression was that no amount of tweaking could make up for the fact that a smaller model is simply less capable—like a dog brain compared to a human brain.
Fortunately, in late December—only two weeks before launch—Meta released a new Llama 3.3 70B model. I was running out of time, so I took the easy way out and switched my OpenRouter configuration to use it. My AI tutor’s responses instantly got better and made fewer mistakes, even with my original system prompt. I was nervous about the 10x price increase from 3B to 70B ($0.04 to $0.42 per million tokens) but gave it a shot anyhow.
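With OpenRouter, that switch really was just one string—something like the following (the exact model IDs on OpenRouter may differ from what’s shown):

# MODEL = "meta-llama/llama-3.2-3b-instruct"  # old: cheap but error-prone
MODEL = "meta-llama/llama-3.3-70b-instruct"   # new: ~10x the price, noticeably fewer mistakes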
Parting Thoughts and Lessons Learned
Fast-forward to the present. It’s been two months since launch, and costs are reasonable so far. With my strict rate limits in place, Python Tutor users are making around 2,000 LLM queries per day, which costs less than a dollar per day using Llama 3.3 70B. And I’m hopeful that I can switch to more powerful models as their prices drop over time. In sum, it’s super satisfying to see this AI chat feature live on the site after dreaming about it for almost 15 years, ever since I first created Python Tutor. I love how cloud APIs and low-cost LLMs have made generative AI accessible to nonexperts like myself.
Here are some takeaways for those who want to play with GenAI in their personal apps:
- I highly recommend using a cloud API provider like OpenRouter rather than self-hosting LLMs on your own VMs or (even worse) buying a physical machine with GPUs. It’s infinitely cheaper and more convenient to use the cloud here, especially for personal-scale projects. Even with thousands of queries per day, Python Tutor’s AI costs are tiny compared to paying for VMs or physical machines.
- Waiting helped! It’s good not to be on the bleeding edge all the time. If I had tried to do this project in 2021 during the early days of the OpenAI GPT-3 API like early adopters did, I would’ve faced a lot of pain working around rough edges in fast-changing APIs; easy-to-use instruction-tuned chat models didn’t even exist back then! Also, there wouldn’t have been any online docs or tutorials about best practices, and (very meta!) LLMs back then wouldn’t have known how to help me code with these APIs, since the necessary docs weren’t available for them to train on. By simply waiting a few years, I was able to work with high-quality, stable cloud APIs and get useful technical help from Claude and ChatGPT while coding my app.
- It’s fun to play with LLM APIs rather than using the web interfaces like most people do. By writing code with these APIs, you can intuitively “feel” what works well and what doesn’t. And since these are ordinary web APIs, you can integrate them into projects written in whatever programming language your project is already using.
- I’ve found that a short, direct, and simple system prompt with a larger LLM will beat an elaborate system prompt with a smaller LLM. Shorter system prompts also mean that each query costs you less money (since the system prompt must be included with every query).
- Don’t worry about evaluating output quality if you don’t have the resources to do so. Come up with a few handcrafted tests and run them as you’re developing—in my case it was tricky pieces of code that I wanted to ask Python Tutor’s AI chat to help me fix. If you stress too much about optimizing LLM performance, you’ll never ship anything! And if you find yourself yearning for better quality, upgrade to a larger LLM first rather than tediously tweaking your prompt.
- It’s very hard to estimate how much running an LLM will cost in production, since costs are calculated per million input/output tokens, which isn’t intuitive to reason about. The best way to estimate is to run some test queries, get a sense of how wordy the LLM’s responses are, then look at your account dashboard to see how much each query cost you. For instance, does a typical query cost 1/10 of a cent, 1 cent, or several cents? No way to find out unless you try. My hunch is that it probably costs less than you imagine, and you can always implement rate limiting or switch to a lower-cost model later if cost becomes a concern. (See the worked example after this list.)
- Related to the above, if you’re making a prototype or something that only a small number of people will use at first, then definitely use the best state-of-the-art LLM to show off the most impressive results. Price doesn’t matter much since you won’t be issuing that many queries. But if your app has a fair number of users like Python Tutor does, then pick a smaller model that still performs well for its price. For me, it seems like Llama 3.3 70B strikes that balance in early 2025. But as new models come onto the scene, I’ll reevaluate these price-to-performance trade-offs.
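To make the cost-estimation point concrete, here’s a back-of-the-envelope check using this post’s own numbers (the per-query token count is a rough guess on my part):

price_per_million_tokens = 0.42  # Llama 3.3 70B via OpenRouter, early 2025
tokens_per_query = 1000          # rough guess: input (code + prompt + state) plus output
queries_per_day = 2000

daily_cost = queries_per_day * tokens_per_query / 1_000_000 * price_per_million_tokens
print(f"${daily_cost:.2f} per day")  # ~$0.84/day, i.e., well under a tenth of a cent per query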