This is a cache of https://news.slashdot.org/story/24/04/06/0541216/ais-impact-on-cs-education-likened-to-calculators-impact-on-math-education. It is a snapshot of the page at 2024-04-07T01:13:35.463+0000.
AI's Impact on CS Education Likened to Calculator's Impact on Math Education - Slashdot

Education AI Programming

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 43

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers the drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org). Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.


Comments Filter:
  • that could teach you math, and provide solutions to math problems.
    • by fibonacci8 ( 260615 ) on Saturday April 06, 2024 @02:53PM (#64375186)
      No, it's more like a calculator that can copy someone else's homework, present that to you, claim that that's close enough to teaching, and doesn't have any recourse when it's incorrect.
We already have calculators that do that. When I was in community college 20 years ago, I had a TI-92 that would solve integral equations. It basically made much of Calculus 2 pointless.

      • by bn-7bc ( 909819 )
Well, IIRC the answers that came out of the TIs were rather easy to spot, as the calculator did some stuff that was computationally efficient but a PITA to do manually. So the answer the calc gave, while correct, always looked rather different from what we got when doing it manually (yes, the manual answers were correct). At any rate, you always needed to show what steps you used, and the calc (at least in its default state) was unable to do that.
      • Well, if you did all your homework with that calculator and never had to show your work, did you actually learn anything? In college at least, it's assumed that the student _wants_ to learn, because the student is paying for the education instead of being forced to go. Now you get to Calculus 3 and they don't allow calculators or you have to show all your work, then do you complain to the professor that it's unfair because you never learned how to do integration?

Did a quick test by asking ChatGPT to do a simple division by pi. Apparently it used a value close to pi = 3.142507.
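For what it's worth, the size of that discrepancy is easy to check in a few lines of Python (the 3.142507 figure is the value the parent reports, not anything documented by OpenAI):

```python
import math

# Compare a division using the true pi against one using the value
# the parent says ChatGPT appeared to use (pi ~= 3.142507).
true_result = 100 / math.pi      # ~31.83099
gpt_result = 100 / 3.142507      # ~31.82173
error = abs(true_result - gpt_result)
print(f"true: {true_result:.5f}  reported: {gpt_result:.5f}  error: {error:.5f}")
```

Off by roughly 0.03% — not much, but a real calculator would be off by zero.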

  • Nope. (Score:5, Informative)

    by Nrrqshrr ( 1879148 ) on Saturday April 06, 2024 @02:45PM (#64375170)

    I disagree.
    The calculator gives you the correct answer. If your math is wrong you will get an answer that makes no sense or straight up an error.
    The "AI" gives you an answer and tries hard to convince you it's correct. You can only tell whether it's a good answer or not if you're capable of writing the good answer yourself, anyway.

    • by Njovich ( 553857 )

      Ok, tell me one question that would realistically be asked in school that ChatGPT 4 will get wrong?

      • by ffkom ( 3519199 )
        There were plenty of news on examples, like https://www.businessinsider.co... [businessinsider.com]

        Of course, once this year's test questions have become part of the training data, it will likely not fail them again.
        • by Njovich ( 553857 )

          This article was from before ChatGPT 4 and it seems version 4 gets them all correct. How about an actual example instead of what version 3.5 could not do (which is a vastly inferior product).

This should not be marked informative. The electronic calculator does not actually give you the exact answer either, so the distinction you are trying to make with AI is only a matter of degree.

      Your last point is quite agreeable though. In all such cases, it is necessary to write the actual answer down first, if you want to assess if the computer answer is within tolerance. The trick is to find a way to write the actual answer which doesn't require writing the actual answer explicitly. That way the compariso

    • Posted without Karma bonus, generated by ChatGPT 3.5.
      Me. "Why is AI better than a calculator?"

      "ChatGPT @ openai.com

      AI and calculators serve different purposes and have distinct capabilities. However, AI can often outperform calculators in various tasks due to its ability to learn, adapt, and handle complex scenarios. Here are some reasons why AI can be considered better than a calculator in certain contexts:
      Adaptability: AI systems can adapt to new data and situations, whereas

    • Posted without Karma bonus, generated by ChatGPT 3.5.
      Me. "Why is a calculator better than AI?"

      "ChatGPT @ openai.com

      While AI possesses numerous advantages over calculators, there are situations where calculators may be considered better suited for certain tasks:
      Speed and Precision: Calculators are designed specifically for numerical computations and can perform calculations quickly and accurately. In scenarios where speed and precision are paramount, such as during exams, financi

  • by paul_engr ( 6280294 ) on Saturday April 06, 2024 @02:45PM (#64375172)
Maybe if the calculator cost $100,000 and gave you wrong information like 2+2=applesauce most of the time.
  • by AlanObject ( 3603453 ) on Saturday April 06, 2024 @02:56PM (#64375196)

Why would someone want to develop coding skills when we are fast approaching the era where systems are programmed pretty much the way the computer on the Starship Enterprise is programmed? Just say what you want and it does the rest. Nobody will need "coding sense."

    Already I rely on GPT (and there are better ones out there) to speed up my projects. Sure I can do Python/C/Java/Typescript/HTML/CSS (or whatever) by hand the same way I can still do long division by hand but this isn't a matter of aesthetics or spirituality anymore. It is about getting the job done and getting paid.

    GPT makes plenty of mistakes and hallucinations (as they say in AI) so it doesn't relieve anyone from knowing what they are doing. Currently. But it is still a productivity enhancer and it is quite easy to see the future where you don't even need to look at the produced code any more than you need to look at the machine instruction output from a compiler.

    The kind of software engineering that we knew of for my career -- more than 50 years now -- will soon be a part of the past. There will always be a need for and plenty of engineers around that can do low-level programming, but that will become a specialty. Or a hobby for some. All those popped-up "learn coding" schools will disappear. There will be some standard courses in public schools K-12 and then curriculum at the college level for the needs of the time.

    • by gweihir ( 88907 ) on Saturday April 06, 2024 @03:06PM (#64375220)

      Naa, the simplistic things ChatGPT can do now are already pretty much the maximum possible. For example, if you want to, say, convert Latin-1 to UTF8 in Python, ChatGPT can save you a few minutes of looking it up. But that is about it.
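For reference, the Latin-1 to UTF-8 conversion mentioned above really is just a couple of standard-library calls in Python — exactly the kind of look-it-up snippet an LLM saves you a few minutes on (a minimal sketch; the sample bytes are made up):

```python
# Bytes that arrived encoded as Latin-1 (0xE9 is 'é' in Latin-1).
latin1_bytes = b"caf\xe9"

# Decode with the source encoding, then re-encode as UTF-8.
text = latin1_bytes.decode("latin-1")
utf8_bytes = text.encode("utf-8")

print(utf8_bytes)    # b'caf\xc3\xa9' -- 'é' becomes two bytes in UTF-8
```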

      so it doesn't relieve anyone from knowing what they are doing. Currently.

      Actually, that problem cannot be fixed for LLMs. It is a fundamental property of the approach used. The other thing is that current LLMs are not early prototypes. They are pretty much the end-result of a long and tedious optimization process and all the easy things are already in there. So no easy wins to be had anymore.

    • "it doesn't relieve anyone from knowing what they are doing"

      I think this is the issue. Let's just start a fight right now. The same people that insist that you can't do "enterprise computing" without systemd (Ubuntu and RedHat come to mind)... Sure there are some functionality of systemd that is attractive for larger installations BUT it all comes with the humongous blobs and bugs of systemd, and an insane amount of overhead to get your python program to print "hello world". The cost benefit ratio of system
    • by cowdung ( 702933 )

      The guy that is improving ChatGPT probably has some coding sense.

  • Pro-tip: Doing calculations is not math. It is just _applying_ some math.

    • Pro-tip: Math isn't logical if it can use imaginary numbers. It's just making up numbers.

      • by gweihir ( 88907 )

        Even worse: _All_ of math is "made up". Do not believe me? Here is an example: Try counting Apples. Are any of those identical? No. The whole idea of "counting" is a made-up fantasy that ignores almost all aspects of reality.

        • by jbengt ( 874751 )

          Try counting Apples. Are any of those identical? No. The whole idea of "counting" is a made-up fantasy that ignores almost all aspects of reality.

          Only if you use a definition of apple that requires each apple to be identical in all respects.

  • 1. Train AI from the human-built internet.
    2. PROFIT.
    3. AI destroys the human-built internet.
    4. ???

  • Hey Slashdot... (Score:5, Insightful)

    by 26199 ( 577806 ) on Saturday April 06, 2024 @03:28PM (#64375266) Homepage

    ...how about we fight the enshittification of the English language a bit here?

    Writing code has very little to do with "computer science".

You can call it "software engineering" if you like, but there's no such discipline; I prefer "software development".

    Does AI help you develop software or prevent you from learning to develop software? Ehm ... not really?

    • Does AI help you develop software or prevent you from learning to develop software?

      Seems to be both. Regardless, very well put.

    • by Anonymous Coward

      Does Rust help you develop software or prevent you from learning to manage pointers correctly?

      • I don't get the fuss about Rust. Java was designed to solve those problems and more 30 years ago. We have two generations of programmers who are way beyond understanding and using pointers, because they simply never had to worry about such things in Java.

        There are way more interesting and advanced safety questions out there. Even Perl had the concept of tainted variables.

        Maybe Rust is a case of NIH syndrome?

  • Cut the crap! (Score:2, Insightful)

    by Anonymous Coward

    Unless there is a floating point error, calculators do not hallucinate answers.

    • by PPH ( 736903 )

      calculators do not hallucinate answers

      Try some complex expressions given as PEMDAS puzzles. Even different versions of the same brand and model calculator give different answers on occasion.
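The classic example is something like 6÷2(1+2), where calculators disagree on whether implied multiplication binds tighter than division. In a language that forces explicit operators, both readings can be spelled out (a sketch of the ambiguity, not tied to any particular calculator model):

```python
# Strict left-to-right evaluation: (6 / 2) * (1 + 2)
left_to_right = 6 / 2 * (1 + 2)     # 9.0

# Implied multiplication binds first: 6 / (2 * (1 + 2))
implied_first = 6 / (2 * (1 + 2))   # 1.0

print(left_to_right, implied_first)
```

Both answers follow consistent rules; the "hallucination" is just two models silently picking different rules.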

  • But we need 'experience' before we can architect anything, right? Or was that all a lie this whole time? Just a way to haze the younger people into bleeding their life for the company?

    Is possible to create a good system without ALSO creating a lot of bad systems first?

    Or is it better to think of this like 'designing for manufacture' versus blacksmiths? Where there are different ways of making the same result (at different scales, with very different tools).

  • Or maybe I've just been watching too much British TV. I read that as cack'em (kill them).

  • People keep talking about how wonderful AI generated code is, and how it's revolutionary, but I still haven't seen any real life examples of working AI-generated code, or any AI-generated code in a language that's not Python.

    Can anyone point me to some examples of AI generated code actually existing or doing something?

  • by cowdung ( 702933 ) on Saturday April 06, 2024 @05:35PM (#64375562)

    As someone who taught programming for 15 years, I find LLMs a bit problematic. Since a common strategy was to give our students small problems to solve like:

    - write a method the prints hello 10 times
    - write a method that finds the even numbers in an array
    - etc..

    These sort of little exercises helped them understand things like the for loop and other basic concepts. The problem is that today you can feed that into ChatGPT and it probably would spit out the solution. Now, you could say that the student shouldn't do that. And cheaters never learn. But cheating has just become much easier and the temptation is great.
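For context, the kind of exercise in question is only a few lines each — e.g. in Python (illustrative sketches, not the actual course materials):

```python
def print_hello_ten_times():
    """Exercise 1: print 'hello' ten times with a for loop."""
    for _ in range(10):
        print("hello")

def find_evens(numbers):
    """Exercise 2: return the even numbers in a list."""
    return [n for n in numbers if n % 2 == 0]

print_hello_ten_times()
print(find_evens([1, 2, 3, 4, 5, 6]))   # [2, 4, 6]
```

Trivial for ChatGPT, but the whole pedagogical point is that the student, not the machine, writes the loop.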

The same goes for problems where you complete the missing code, fix the bug in the code, or do a small project like writing a calculator like the one in Windows, or Minesweeper.

    Teachers will need new teaching methods, since ChatGPT can lift the weight of the simple problems. But if you can't solve simple problems, how do you develop the skills to solve the harder ones?

    • ... since ChatGPT can lift the weight ...

The problem with lexical guessing (LLM) isn't GPTs citing and re-using their own garbage: it's that every (accurate) answer will be fed into a neural network, so an LLM will know all the answers to all the training problems. A student can then cheat their way through the first 2 or 3 semesters of every subject. That's a bigger problem because only the core competency (e.g. software development, accounting, sonograph analysis) extends through 4 semesters. Out-of-classroom assessment will not be possible because

    • Teachers will need new teaching methods, since ChatGPT can lift the weight of the simple problems. But if you can't solve simple problems, how do you develop the skills to solve the harder ones?

      They could do what one of my professors did when he got two identical answers to a test question. Called us in separately and asked us how we’d approach solving a similar problem. When I got it right it was clear I was not the one that copied an answer.

      Have them explain the logic behind their answer. Sure a student could ask Chat-GPT the same question, but then again so could the professor.

  • > "Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be."

Hahaha. Not many people imagine that AI will replace what *they* do, only what *other people* do.

  • ... understanding and assessing ...

    That comes from learning what breaks the machine: Learning when human intellect ignores the rules controlling a machine. A calculator didn't eliminate the need for recognizing, filtering and organizing the elements of a problem. Now, the student doesn't have to make the machine fail, a (trained) LLM can regurgitate the answer, no filtering, organizing, or thinking involved.
