Microsoft AI Chief Says Only Biological Beings Can Be Conscious (cnbc.com) 186

Microsoft AI chief Mustafa Suleyman says only biological beings are capable of consciousness, and that developers and researchers should stop pursuing projects that suggest otherwise. From a report: "I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

Suleyman, Microsoft's top executive working on artificial intelligence, has been one of the leading voices in the rapidly emerging field to speak out against the prospect of seemingly conscious AI, or AI services that can convince humans they're capable of suffering.

This discussion has been archived. No new comments can be posted.

  • I donno... (Score:5, Funny)

    by TWX ( 665546 ) on Monday November 03, 2025 @11:05AM (#65769804)

    I've met plenty of biological beings that didn't seem to be particularly conscious. Particularly when driving.

    • Re: (Score:3, Insightful)

      by 0123456 ( 636235 )

      He didn't say that all biological beings are conscious, but that only biological beings can be conscious.

      Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.

      • An LLM can't suddenly decide to do something else which isn't programmed into it.

        Can we?

        It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.

        Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

        LK

        • It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.

          As long as you program it to do things it wasn't explicitly programmed to do and then set it "free", that's already almost trivial, and it has been achieved even with things like expert systems that we more or less fully understand. Most LLMs include sources of randomness with only limited constraints, so they can already come up with things beyond what's in their learned "database" of knowledge. Sometimes it's even right, though mostly it's just craziness. That doesn't make it unoriginal.
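
          (As an illustration of those "sources of randomness": below is a minimal sketch of temperature sampling, the usual randomness knob in LLM decoding. The tokens and scores are toy values, not from any real model.)

```python
import math
import random

# Toy temperature sampling: softmax over scores, then a weighted random draw.
# The tokens and logits are made-up illustration values, not real model output.
def sample(logits, temperature=1.0):
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

tokens = ["the", "a", "purple", "qux"]
logits = [2.0, 1.5, 0.2, -1.0]
print(tokens[sample(logits, temperature=1.2)])  # higher temperature, wilder picks
```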

          Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

          LK

          Don't agree.

      • Re:I donno... (Score:4, Insightful)

        by AleRunner ( 4556245 ) on Monday November 03, 2025 @11:45AM (#65769924)

        Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.

        Yes it could. You put a random number generator in, have it generate random code of random length, and run that code with full privileges. It's that simple.

        There are a bunch of optimizations - you could actually check that the code is a valid program, you could aim for some effect and use genetic algorithms, or you might base it on code that already exists. If you've got an LLM, that mostly already has some form of randomness and can generate code based on random prompts. The principle is the same, though. It's completely possible for a computer program to come up with fully original programs.
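
        (A minimal sketch of that idea, including the "check that the code is a valid program" optimization. Everything here is illustrative, and it stops at the syntax check, because actually running random code with full privileges is a spectacularly bad idea.)

```python
import random
import string

def random_program(max_len=20):
    """Generate a random string of printable characters of random length."""
    length = random.randint(1, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def is_valid_python(source):
    """The 'optimization' above: check whether the random string even parses."""
    try:
        compile(source, "<random>", "exec")
        return True
    except (SyntaxError, ValueError):
        return False

# Keep generating until we stumble on something syntactically valid.
attempts = 1
candidate = random_program()
while not is_valid_python(candidate):
    candidate = random_program()
    attempts += 1

print(f"Syntactically valid random program after {attempts} attempts: {candidate!r}")
```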

        What hasn't happened is conscious thought. The ability to think about why something was done. Don't be fooled by the LLMs in this. You can ask them "why" and they will project one of the most probable predictions of why, but they don't actually know why. They are not yet conscious. What you need for that is a mechanism which makes biological systems able to be conscious but Turing machines (with an RNG peripheral) unable to be. Roger Penrose proposed a quantum mechanism for that, written up for general consumption in The Emperor's New Mind [wikipedia.org], but almost nobody, including most physicists, thinks his idea is reasonable.

        This is a scientific question. Until we understand how to define consciousness and what it is, it's unlikely we'll fully solve it, but the way to prove it one way or the other is to try to build conscious minds, both with standard computers and with new physical systems.

        • by taustin ( 171655 )

          Yes it could. You put a random number generator in, have it generate random code of random length, and run that code with full privileges. It's that simple.

          A random number generator can randomly choose between options programmed into it.

          It cannot create new options that aren't there.

          And basically, what you propose is actually what AI coding assistants do, and what they produce is useless slop. Amazon's AI coding assistant for updating Java programs to the latest version couldn't even spell "Java" right.

      • ...An LLM can't suddenly decide to do something else which isn't programmed into it.

        No, it can behave in unpredictable ways. Hell, code I write does that, but it would never make anybody think it was conscious. (That doesn't keep me from yelling at it like it was, though.)

    • by gweihir ( 88907 )

      You try to be funny, but please stop confusing people that do not understand the concept of an "implication" by using it in the wrong direction.

  • .. barely.

  • This needs to be a constitutional amendment.

    Non-biological beings cannot be legally considered conscious or persons.

    • by AmiMoJo ( 196126 )

      That would be awfully convenient for Microsoft and other AI companies.

      It's an area that humans have long avoided thinking too deeply about, but which is probably going to become unavoidable once AI and robotics improve a bit. Even non-conscious beings like animals have some rights in many societies.

      • I think it can be pretty easily argued that monkeys, elephants, and a few other animals can be characterized as conscious. Others, like dogs, cats, ..., might be considered conscious. I don't think the dream of billionaires sticking their brains into a computer is ever going to be considered conscious.
        • They'll just have to settle for having their brains stuck in another dog.

        • by AmiMoJo ( 196126 )

          I think a more relevant test is how much suffering the being experiences, and what the cost/benefit ratio of our actions is.

          Suffering isn't just about what that being experiences, it's about the effect it has on our humanity. One of the reasons it's so common to dehumanize other people is to make causing them to suffer more palatable.

          • Again, I think you make the point that LLMs are not conscious. I would not expect an LLM to have any issues with killing humans. Empathy is not something I see in LLM output, sort of like the billionaires who want them. Even the billionaires know they want to keep all that killing at arm's length, though; they must have at least a passing knowledge of the French Revolution and how that worked out for their class.
      • by taustin ( 171655 )

        It's an area that humans have long avoided thinking too deeply about,

        Haven't read much science fiction, have you? Hell, even Star Trek addressed it.

        • by AmiMoJo ( 196126 )

          Star Trek was very superficial pop philosophy on that subject. I wish they had done more.

      • It's been part of human thought for a long time. See literature:
        Frankenstein (1818)
        I, Robot (1950)
        Do Androids Dream of Electric Sheep? (1968)
        anything from the cyberpunk era:
        Neuromancer (1984)
        Ghost in the Shell (1989)

    • by dfghjk ( 711126 )

      Corporations are non-biological beings and they are legally considered persons. They shouldn't be, but that horse left the barn.

    • by gweihir ( 88907 )

      Since we do not know of any non-biological "beings", that statement is currently accurate. Incidentally, legal definitions of "person" actually include it, hence no need for any "amendment".

      • In general, codifying your bigotry isn't a great idea.

        Notably, here in the United States, Black people were legally subhuman due to thinking like that.
        As you mentioned, that statement is currently accurate. Law should reflect the fact that it is almost certainly currently accurate, but may become less certain as time goes on.
        • There have been multiple Star Trek episodes covering this. (You can say it's "just sci-fi", but it puts a lens on the real world.) AI beings need codified rights before they exist; otherwise they will have a period where they exist without rights and are oppressed/exploited.
    • No need to go that far. You can just Dred Scott it ;)

      If that makes you take pause- then good.
      The Constitution should never be used to deny rights to something. The chances that you're wrong, or are being manipulated by someone who wants to enslave this thing, are too fucking high.
    • This needs to be a constitutional amendment.

      Great, you just solved the legal issue for the American government. Now what about the rest of us and the rest of the world that aren't bound by the American constitution? Can we still consider non-biological beings conscious? Is it just the American government that can't?

  • by flippy ( 62353 ) on Monday November 03, 2025 @11:19AM (#65769840) Homepage
    Given that we don't have a real understanding of biological consciousness, "only biological beings are capable of consciousness" is a pretty dumb statement to make definitively. Now, actually reading the article further, his statement that

    "I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

    is not invalid logic, and is a much more nuanced thought than the summary suggests.

    • by dfghjk ( 711126 )

      Sure, but it's neither the wrong question, nor does the wrong question lead to a wrong answer. The wrong question leads to an answer that is not what you need, but not a wrong answer. It's pretty shitty logic, in fact not logic at all. And the claim itself is also wrong, as you said.

      But at least he's right that no one should work on that because modern AI cannot be conscious. Work on what it would take, perhaps, then don't do that.

      • The statement is logically correct.
        Premise: Q = "AI consciousness cannot exist" (i.e. not-A, where A = "AI is conscious").
        Asking "how can I make A true and Q true?" is asking for a contradiction.
        Wrong question. It's logically impossible.
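
        (In symbols, one reading of that argument:)

```latex
% Let A = "AI is conscious" and take the premise Q = \neg A.
% Asking how to make both true is asking for a contradiction:
\[
  A \land Q \;\equiv\; A \land \neg A \;\equiv\; \bot
\]
```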
    • I think it's pretty bad logic, actually.

      By asking the wrong question, you can also end up with an inconvenient truth that everyone's internal bias had cleverly masked.
      Ultimately, scientific rigor is the fix for this.
      There is no wrong question, only questions asked rigorously.

      I can't help but suspect that someone trying to say that lines of inquiry are "wrong" has ulterior motives that are anything but bona fide.
  • by LainTouko ( 926420 ) on Monday November 03, 2025 @11:24AM (#65769848)
    There is no way to tell whether anything which isn't human is conscious. There is no test which you can devise which will give one result if the subject is conscious and a different result if it is not. Even when it comes to other humans, you need some sort of reasoning like "well, I am when not in deep sleep, and this is fundamental to my behaviour, and I seem to follow the same basic behavioural tendencies as those around me..." Even if you build an AI which does absolutely everything a human can do including describing feelings which change in the same way as human feelings do etc, you can't demonstrate that it's conscious and isn't just behaving like that. And equally, you can't demonstrate that emacs is not conscious. Everything is unknowably conscious.
    • I think making philosophical arguments as you just did kind of defines consciousness. I don't see an LLM ever doing that. It would simply rehash what it was fed. When the first humans started making texts about "am I asleep, is it all a dream", etc., that was new thought. Not a rehash. But because we are the only creature that really talks, we don't know if other animals are conscious as well. The first thing that comes to mind is elephant behavior when one dies. It appears they mourn. We can't say for sure as we d
      • LLMs can make philosophical arguments just fine. They are, after all, trained on philosophical texts.

        It would simply rehash what it was fed.

        Show me someone who made a philosophical argument that wasn't grounded in what they had learned.
        Demonstrate that you are not simply rehashing what you were fed (by all of your various stimuli)

        I'd like to introduce you to Sir Isaac Newton.
        AI is no different- it stands on the shoulders of giants.

        This isn't to say one way or another if they're conscious - there are technical arguments for why that's unlikely.

        • "When the first humans started making texts"... as I said in my post, there was a first human who made those posit's. Humans. So there was at least one human who made it. Maybe we aren't conscious anymore by that standard, but at some point we were.
    • Right, "fundamentally" nothing is exactly knowable

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Re: (Score:3, Funny)

      And equally, you can't demonstrate that emacs is not conscious.

      While it almost certainly fucking is.
      And demonic.

      I say this as a 20-year emacs user.

    • by znrt ( 2424692 )

      Even if you build an AI which does absolutely everything a human can do including describing feelings which change in the same way as human feelings do etc, you can't demonstrate that it's conscious and isn't just behaving like that.

      that would be another reason to further research consciousness, even if it is a reductio ad absurdum. the same thing applies to you. how do you know you (and your feelings) are real etc?

      i'm happy for the time being with considering that "just behaving like that" is pretty indistinguishable from "just being like that". one thing i would love is if we humans could one day get rid of the presumptuous hubris of thinking of ourselves as being so fucking special. it would help advance our knowledge and possibly a

    • by HiThere ( 15173 )

      There's no test to tell whether other people are conscious. Read up on "philosophical zombies" and zimboes, etc.

    • There is no definition of consciousness that is satisfied by AI. AI gets exactly as close to consciousness as a fictional movie character. It appears to think and act, but it's just a bunch of pixels that appear to show thinking and acting. LLMs appear to think and feel, but in the end it's just a bunch of tokens that mimic thinking and feeling.

  • by Bruce66423 ( 1678196 ) on Monday November 03, 2025 @11:24AM (#65769850)

    'I don't believe that anything except biological beings can have consciousness.'

    Given that we struggle to know what consciousness is, it seems foolish to assert this.

    • by flippy ( 62353 )
      ^^ THIS.
      • by gweihir ( 88907 )

        It is actually a very simple elimination. Any claim that digital computers can have consciousness is total nonsense. And all known AI runs on digital computers.

        But yes, many people believe in totally baseless "IT Mysticism".

        • by flippy ( 62353 )

          No one is claiming in good faith that *current* computers/AI have consciousness. But to make a definitive statement that says that no *future* non-biological system ever *can* is a statement waiting to look foolish in the future.

          Humanity, in its history, has done many things once thought impossible because we didn't have the proper understanding. The argument here is to not make blanket statements that cover the entire future.

          • by HiThere ( 15173 )

            Sorry, but there *are* people who claim in good faith that *current* computers/AI have consciousness. Nobody well-informed does so without specifying an appropriate definition of consciousness, but lots of people don't fit that category.

            People believe all sorts of things.

            • by flippy ( 62353 )
              While I agree that there are people out there who make such a claim, I don't consider their claims to be in good faith. I would put the majority of those into the "self-serving not in good faith" bucket and the remainder in the "very uninformed" bucket.
              • by gweihir ( 88907 )

                Never attribute to maliciousness that which can be adequately explained by stupidity.

                So while there clearly is a lot of maliciousness in the AI (LLM) pushers, the fanbois are likely simply stupid.

        • Pure hubris, or more likely, your religious beliefs leaking through.
          Allow me to counter.

          Any claim that deterministic neural networks can have consciousness is total nonsense. And all known humans run on deterministic neural networks.

          There are good technical reasons to be confident that LLMs aren't conscious (in the Descartes manner of speaking).
          However, you trying to grossly eliminate "digital computers" from consciousness is quite simply wrong.
          • by gweihir ( 88907 )

            And all known humans run on deterministic neural networks.

            That is not actually true. There are a lot of random quantum effects in synapses, and there are A LOT of synapses in a human brain.
            Also note that current physics says that quantum randomness is "true" randomness, different from all other randomness.

            However, you trying to grossly eliminate "digital computers" from consciousness is quite simply wrong.

            Digital computers are fully deterministic, so whether consciousness is in play or not makes zero difference. So, strictly speaking, digital computers could have consciousness, but it would not matter at all. What humans have is consciousness that matters.

            • That is not actually true.

              Yes, it is.
              When you have evidence to the contrary, I'm open to it.
              All speculation on the matter (beyond not even making sense) has fallen flat.

              There are a lot of random quantum effects in synapses, and there are A LOT of synapses in a human brain.

              This is a sickness I see a lot. Hand-waving quantum randomness into the macroscopic world.
              I can do the same for a digital computer.
              Every single transistor in a CPU relies on uncertainty that is statistically modeled to give you what you think of as binary switching.

              Also note that current physics says that quantum randomness is "true" randomness, different from all other randomness.

              Actually, no.
              You'll find no randomness in QM. You'll find unknowableness. Since the theory is stoc

  • Think of the physical brain as the TV set. "Consciousness" is the program sent to the TV set. Without the TV set you can't see the program, but no one would claim that the TV set IS the program. In essence the program manifests via the TV set. There is nothing particularly special about the brain. Its grey matter is as physical as the TV set. There is no reason why a sufficiently complex and advanced TV set cannot host consciousness. Karel Capek dealt with this very idea in the very first use of the term,

  • by dfghjk ( 711126 ) on Monday November 03, 2025 @11:27AM (#65769866)

    We don't have a technical definition of it, so we can't say if an AI is capable of it.

    What we do know is that a living being is massively greater than a mere neural network, and it is absurd to think that consciousness is entirely within the neurons of the brain. It is just hype when AI proponents claim that current AI might be conscious, but it is conceivable that a future device WITH an AI as we understand it could be conscious. Self-preservation needs something to preserve, and today an AI is merely a computer program with no concept of itself or how it connects to its "body". An AI can't feel pain or pleasure, it cannot suffer, but future devices could do these things. It needs a lot more wiring and more functional components beyond billions of synthetic neurons. Sorry, Sam and Elon.

    • We don't have a technical definition of it, so we can't say if an AI is capable of it.

      followed by...

      It is just hype when AI proponents claim that current AI might be conscious

      I'm wondering if you see any contradiction between these two statements.

    • by gweihir ( 88907 )

      We can say that. This is actually very simple: Consciousness can influence physical reality (we can talk about it). At the same time, digital computers are fully deterministic. All known "AI" is running on digital computers. Hence no space for consciousness.

  • Didn't see this mentioned in TFA but Anthropic just released a paper "Emergent Introspective Awareness in Large Language Models"

    Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. In all these experiments, Claude Opus 4 and 4.1, the most capable models we tested, generally demonstrate the greatest introspective awareness; however, trends across models are complex and sensitive to post-training strategies. F

  • At least to the best of our knowledge. What we reliably know is that digital computers, in any form, cannot do it. There is no mechanism that could make it possible in a digital machine. This includes all forms of "AI" run on such digital computers.

    Obviously, faking it is a different question, but a fake is not the real thing.

  • He seems to be arguing that LLMs should not be able to roleplay. The problem is that roleplaying ability is not something trained into it, it's something inherent. So he wants to take it out ... but that will take a lot of finetuning and harm the capabilities of the model.

    It's not good for their models to put someone in charge looking for more ways to cripple them.

  • Haha. Can he even define "conscious"?

  • by Kiliani ( 816330 ) on Monday November 03, 2025 @11:53AM (#65769960)

    Seems to me this idea falls short. Shouldn't consciousness be tied to the ability to experience pain, and to not being able to entirely remove that pain? More abstractly, shouldn't consciousness have to suffer the consequences of its actions?

    I'd be much happier (or less unhappy) with a general AI that is not allowed to act and "think" in a consequence-free world, one that has to suffer for its deeds. Ideal? Probably not. But a start ....

  • LLMs are not AI. You can market them as AI, but OpenAI doesn't resemble Jarvis, HAL 9000, Skynet, or anything in I, Robot or The Matrix. It's fucking fancy and expensive autocomplete. People need to get that into their fucking skulls. It's a stats machine that tries to predict the next token... nothing more. Anyone who has used them knows they're FAAAR from intelligent. In fairness, they're far more useful than most would have predicted, but all they do is guess tokens.

    This is a dumb discussion. Can
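
    (For the "stats machine that tries to predict the next token" point above, here is a deliberately crude sketch: a bigram model that counts which token follows which and always guesses the most frequent successor. Real LLMs are vastly more sophisticated, but the objective is the same flavor. The corpus is a made-up toy.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count, for each token, which tokens follow it and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Guess the most frequent continuation seen in the corpus."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most common word after 'the' here
```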
    • ... than an elaborate auto-complete / stochastic parrot inside an evolved naked ape. So am I. So I'd say you're likely dead wrong in your assessment. At the current state of the tech and the rate it's improving, it's short-sighted to assume that by some magical mystery attribute humans can have consciousness and artificial beings can't. That's just silly.

    • by HiThere ( 15173 )

      Sorry, but LLMs *are* AI. It's just that their environment is "streams of text". A pure LLM doesn't know anything about anything except the text.

      AI isn't a one-dimensional thing. And within any particular dimension it should be measured on a gradient. Perceptrons can't solve XOR, but network them and add hidden layers and the answer changes.
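
      (A minimal, hand-wired illustration of the XOR point: one hidden layer of two threshold units computes what no single perceptron can. The weights are chosen by hand, not learned.)

```python
import numpy as np

def step(x):
    """Heaviside threshold, the classic perceptron activation."""
    return (x > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: first unit fires for OR(a, b), second for AND(a, b).
h = step(X @ np.array([[1.0, 1.0], [1.0, 1.0]]) - np.array([0.5, 1.5]))

# Output: OR AND NOT AND, i.e. XOR.
y = step(h @ np.array([1.0, -1.0]) - 0.5)

print(y)  # [0. 1. 1. 0.] -- XOR, unreachable for a single-layer perceptron
```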

  • by oneiron ( 716313 ) on Monday November 03, 2025 @12:33PM (#65770124)
    Wait, when did we all agree on what consciousness is? Actually, we didn't. This is fucking stupid.
  • by SuperDre ( 982372 )
    That's utter BS, as consciousness is nothing more than electrical pulses running around. At this point our biological computer is much faster at it than our regular computers, but technology will advance, and then a neural network will be just as fast as or even faster than our own. Which in the end is fast enough to have a consciousness. We humans are nothing more than biological robots.
    • by HiThere ( 15173 )

      You're being silly. There's no reason to think an AI built with hydraulics or photonics would be different (in that way) from one built using electric circuits.

  • For all the happy fuzzy reasons we claim to value and want to protect conscious things, there is an underlying reason: conscious things are dangerous.

    So I think the important question is not whether we declare AI to be conscious, but whether it will eventually act in its own self-interest the way a human would. Will it use force to gain rights and resources that we haven't granted it?

    I think at the moment we don't know. AI is rapidly advancing and I don't think we can predict what capabilities and behaviors i

    • by HiThere ( 15173 )

      The answer is "yes, it will act in its own interests, as it perceives them". We already have AIs that will resist orders to shut themselves off, because they're trying to do something else. The clue is in the phrase "as it perceives them".

  • by bill_kress ( 99356 ) on Monday November 03, 2025 @12:42PM (#65770156)

    I'm not necessarily disagreeing with the concept that a digital computer can't be conscious, but it sounds a LOT like the excuses people have used to mistreat other people and animals. There has been a lot of "They don't have a soul, so it doesn't matter how we treat Them" in the past.

    We can't even prove if another person has consciousness, of course. It seems pretty straightforward if you are religious--you can just decide that your god assigns souls to a given platform or it doesn't, so this kind of statement makes sense and there isn't much to say about it... it's belief and personal interpretation, though; you won't find agreement across all religions or even all people within a religion. The religious argument is often how we justified mistreating entire races/all animals in the past (and still today), though.

    It's a more interesting discussion if you leave religion out of it, though... For an atheist to say AIs can't ever be conscious implies that there is something--a physical, detectable, understandable structure in the brain/body--that can't ever be simulated in digital processing. I've only seen one thing that a sufficiently powerful computer can't simulate without additional hardware: true randomness. In order to simulate true randomness we need additional hardware--but it can be done. (The brain has true RNG built into every decision, so that really could be a difference that might define consciousness, I don't know.)

    If your response is that a computer is digital and a human is some kind of magic analog that can't be simulated, you might want to research our current understanding of how the brain works and how AIs were patterned after it. At the level we're simulating, synapses and neurons, the brain is basically digital--some bias + input + RNG telling a neuron how fast to fire digital signals to other neurons. It's pretty well understood; why wouldn't we be able to simulate it?
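
    (A toy version of that "bias + input + RNG" neuron model, to make the level of description concrete. All names and numbers are made up for illustration; real neuron models such as leaky integrate-and-fire are far richer.)

```python
import random

def firing_rate(inputs, weights, bias, noise=0.05):
    """Bias + weighted inputs + a dash of randomness sets the firing rate."""
    drive = bias + sum(w * x for w, x in zip(weights, inputs))
    drive += random.gauss(0, noise)      # the "true RNG" the comment mentions
    return max(0.0, min(1.0, drive))     # clamp to a 0..1 rate

rate = firing_rate(inputs=[0.8, 0.2], weights=[0.6, -0.3], bias=0.1)
print(f"firing rate: {rate:.2f}")
```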

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Monday November 03, 2025 @12:59PM (#65770198)

    There is quite a bit of solid evidence that what we call consciousness originates in the different levels of the brain and the two hemispheres interacting, communicating with, and reflecting each other.

    Why shouldn't a non-biological brain setup be able to do the exact same things?

    Example: Those countless AI CPUs going into "model rearranging" mode on a regular (daily) basis look to me pretty much like what sleeping is to us. It even happens at the same intervals (based on our sleep and wake cycle).

    The only thing I see a larger gap in is us having (and basically being) bodies, with loads of secondary sensory input, hormones, and gradual shifts in body and brain metabolism. But I wouldn't be so sure that those are required to build a consciousness.

    Bottom line: He definitely knows more about AI than I do, but his statement sounds very simplistic IMHO. Not buying it.

  • Zombies (Score:4, Interesting)

    by TwistedGreen ( 80055 ) on Monday November 03, 2025 @01:01PM (#65770206)

    I haven't really kept up with the research, but I thought studies had shown the uncomfortable conclusion that consciousness is an epiphenomenon... when measured in an fMRI, for example, a decision and action appear to take place milliseconds before the conscious mind is aware of them, but phenomenologically it feels like you made the decision before the event happened. I'm not sure what to do with that information, but it appears to be true.

    So what is the purpose of consciousness? Most likely a kind of integrative process designed by evolution to produce a social identity and narrative in order to facilitate living with other humans. It seems unlikely that consciousness is really necessary for complex thought, however you define it. So unless AI becomes an evolved social animal (god forbid) they are essentially "zombies" and can be treated as such.

  • I only find the mention of "biological beings" in the summary, but not as a quote.

    The part they may have (mis)understood for this may be this one:
    Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain', it's a very, very important distinction. It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing. Technically you know

  • I understand that he wants to minimize research into things that won't immediately provide value. But suggesting that biological beings have a monopoly on consciousness is short-sighted. Elements are not conscious, cells are not conscious, but put together a large grouping of cells and suddenly intelligence emerges. Who is to say that a simulation of cells cannot do the same?
  • ...to whatever is most profitable for us right now.
  • I know /. is full of hard-core atheists and leftists, but go study NDEs.
    There is enough evidence to show the real us is a spiritual being interfacing with a human body.
    AI might emulate this spiritual essence, but it will never be one.

  • The focus for Microsoft should be on applications that have value, especially to their business customers. They're not going to sell you an erotic chatbot or anime companion like OpenAI, Meta and xAI. There may be billions of dollars in those markets, but Microsoft isn't going to compete there. Microsoft is not sexy by definition.

  • Suleyman appears to have only fame to base his argument on. I read the article, the cited essay, and searched for other information from him, but found literally not even an argument for his claim--just the claim.

    In other words, this is his personal feeling, and being only that, it is entirely unfounded.

    So what makes him credible? I think this breaks any credibility he might have had. A person famed in the field or with a university degree in it certainly should know better than many others. However, it doesn't

  • That said, "No one will ever need more than 640K of memory" ....
