David Patterson Says It's Time for New Computer Architectures and Software Languages (ieee.org) 360
Tekla S. Perry, writing for IEEE Spectrum: David Patterson -- University of California professor, Google engineer, and RISC pioneer -- says there's no better time than now to be a computer architect. That's because Moore's Law really is over, he says: "We are now a factor of 15 behind where we should be if Moore's Law were still operative. We are in the post-Moore's Law era." This means, Patterson told engineers attending the 2018 @Scale Conference held in San Jose last week, that "we're at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend's new computer was so much faster." But last year, he said, "single program performance only grew 3 percent, so it's doubling every 20 years. If you are just sitting there waiting for chips to get faster, you are going to have to wait a long time."
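As a rough back-of-the-envelope check on that last figure (an illustrative sketch, not something from the talk): if single-program performance grows at a rate r per year, the doubling time is log 2 / log(1 + r); at 3 percent that works out to roughly 23 years, in the same ballpark as the "doubling every 20 years" Patterson quotes.

```cpp
// Illustrative sketch: doubling time implied by a given annual growth rate.
#include <cmath>
#include <cstdio>

int main() {
    const double annual_growth = 0.03;  // ~3% per year, the figure quoted above
    const double doubling_years = std::log(2.0) / std::log(1.0 + annual_growth);
    std::printf("At %.0f%% per year, performance doubles every %.1f years\n",
                annual_growth * 100.0, doubling_years);  // prints ~23.4 years
    return 0;
}
```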
Yes, even M0AR Languages (Score:3, Insightful)
We've only had three new ones come out this week. We need M0AR! M0AR languages!! M0AR syntaxes!!
M0AR of all the things!
In fact, it should be a requirement for all CS majors to develop their own language before graduation, so everyone can be *THE* subject matter expert in a language. That would be awesome. Everyone would be able to charge $500/hr for being the ONLY expert in their language.
What could be wrong with this??
Re: (Score:2)
The belief in syntax as immutable is wrong - it's a tool like any other; change it if it holds you back. I'm thinking now about how we stay wedded to things like C etc. - they have their place, but they
Starting in 2005 (Score:4, Interesting)
A SPECint graph shared on Quora shows this slowdown starting back in 2005.
https://qph.fs.quoracdn.net/ma... [quoracdn.net]
Re: (Score:2)
"Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years." - Moore's law [wikipedia.org]
Re: (Score:2)
That graph seems to confirm Moore's law is still holding.
Yep. It leaves off multi-core performance, which should follow the transistor-count growth.
Re: (Score:3)
From wikipedia: Intel stated in 2015 that the pace of advancement has slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm. Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two." Intel is expected to reach the 10 nm node in 2018, a three-year cadence.
So Moore's Law is slowing from 2 to 3 years.
Re: (Score:2)
Which slipped to 4Q2019 [extremetech.com], with little prospect of an (Intel-scale) 7 nm process following on any reasonable timescale.
3 years my
Re:Starting in 2005 (Score:5, Insightful)
People confuse Moore's law with performance. Moore observed that the total number of transistors on a chip was doubling every 18 months. For a long time, that meant that the clock frequency was also doubling.
Then, a nasty habit of physics to smack us in the phase --- err, face --- came along in the form of speed of light limitations. Given the size of contemporary chips, it just is not (and is unlikely to ever be, if what we know about fundamental physics is correct) possible to communicate from one side of a 1 cm die to the other much faster than in the range of a handful of gigahertz clock speeds, give-or-take. Even with photons going in straight lines in perfect vacuum (none of which happens on a chip) the best you could hope for would be a 30 GHz clock rate, a paltry ten times faster than today's CPUs.
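For what it's worth, the arithmetic behind that 30 GHz figure is easy to reproduce (a minimal sketch of my own, assuming a signal that must cross a 1 cm die once per clock cycle at the vacuum speed of light):

```cpp
// Sketch: light-speed bound on a clock whose signal must cross a 1 cm die
// once per cycle, in vacuum and in a straight line (real on-chip signals
// are considerably slower).
#include <cstdio>

int main() {
    const double c_m_per_s = 2.998e8;              // speed of light in vacuum
    const double die_m = 0.01;                     // 1 cm die width
    const double crossing_s = die_m / c_m_per_s;   // one-way traversal time
    const double max_clock_hz = 1.0 / crossing_s;  // one traversal per cycle
    std::printf("crossing time: %.1f ps, max clock: %.1f GHz\n",
                crossing_s * 1e12, max_clock_hz / 1e9);  // ~33.4 ps, ~30 GHz
    return 0;
}
```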
One obvious solution is to make circuits that are smaller, and thus we started to get more CPUs on a single die. Still, those CPUs need to synchronize with each other, the cache system, etc., so there remain chip-spanning communications constraints.
The limits on the size of transistors, and thus perhaps on the total number on a chip, are looming but haven't arrived yet. The limits of raw clock speed most definitely have. It is safe to say that our chips will continue to get faster for a while, but the heady days of generation-to-generation massive improvements in single-thread CPU performance are over.
Re: (Score:3)
In Star Trek TNG they used warp fields to enable data transfers faster than light speed. (Yes I know warping & FTL is just fiction.)
And? (Score:2)
Re: (Score:3)
We have hit that wall. This technology is mature, and everything it can do well, it can do now at close to maximum-possible speeds. Sure, software sucks today and coders are mostly incompetent, so there is some speed increase to be expected from that angle, but that is it. That there was an era of seemingly exponential growth in no way implies it will continue without limits. And it does not.
Re: (Score:2)
I completely agree with that.
Turns out that here, as in any other area, the past is not a reliable predictor of the future.
Re: (Score:2)
We have hit that wall.
Not really. We have come to an obstacle that requires a change in tactics. Processing power is still increasing at Moore's-law rates, but only for multi-core applications. Once we adapt to the new paradigm, where to double your performance you need to double the number of cores, things will pick up again. However, this is a really hard change to make and it is going to take some time to adapt.
Re: (Score:2)
It does not really matter, because we will not get it going much faster than today anyway. There is no "new architecture" or "new language" that will change that. Massively parallel systems failed last century, and did so several times. Vector architectures have just hit the same brick wall as conventional ones. There really is nothing else.
Re: (Score:2)
Magic?
No. Not going to materialize and even if it did, it would not accelerate most tasks, just a small set of very specific ones.
Re: (Score:2)
Magic?
No. Not going to materialize and even if it did, it would not accelerate most tasks, just a small set of very specific ones.
But, if it works, there are tasks such as building predictive trees for applications that could speed up processing. For example, a GPU is a processor dedicated to the very specific task of graphics processing (though we have found other uses, such as bitcoin mining). It's quite possible that we will find specific tasks that a quantum processor can handle much faster than a CPU. Offloading those tasks would provide a performance gain.
There is also the likelihood that there are other processes that can
Re: (Score:3)
When is the last time gaming was held up by processing?
Factorio? Kerbal Space Program? Dwarf Fortress? Any game that does heavy simulation?
Bring back 'capability machines' (Score:5, Interesting)
I worked on the BiiN project. https://en.wikipedia.org/wiki/... [wikipedia.org] A 'capability' was a specific, hardware-protected feature that was set up to be unforgeable and to contain access rights. This computer-architecture approach dates back to the Burroughs 6500 https://en.wikipedia.org/wiki/... [wikipedia.org] and even to some aspects of MULTICS.
They're definitely not von Neumann architectures, since a capability pointing to executable code is a very different thing than a capability pointing to data. In many respects, these would be "direct execution engines" for object-oriented languages (even C++, with some restrictions on that language's definition).
A huge part of this is getting over the illusion that you have any clue about the (set of) instructions generated by your compiler. If you're working on a PDP-8 or even PDP-11, C might well be close to 'assembly language'. But with the much more complex instruction sets and compiler optimizations to support those instruction sets, most languages are far removed from any knowledge of what the underlying hardware executes.
Re: (Score:2)
I have a few wafers of the 432 in my desk drawer at work.
Re:Bring back 'capability machines' (Score:5, Interesting)
Are you familiar with the proposed Mill architecture? Their work with what they call turfs and portals sounds very similar. It allows secure calls across protection boundaries, hardware-managed data stacks, unforgeable process IDs, and byte-level permission granularity. It's definitely not a RISC machine, but it's not a C machine either, with hardware features that treat pointers as a type of their own containing hardware-managed metadata bits useful for accelerating garbage collection.
That is really nonsense (Score:2)
Various alternate architectures have been tried out over the decades. A lot of other programming models have been tried out as well. They all basically failed or live on only in niches because people could not hack coding for them.
Performance increases for most tasks are over. Deal with it and stop proposing silver bullets. It only makes you look stupid.
Re: (Score:2)
Of course it's text; what else would we use? Written languages are like 5,000 years old and have always been more expressive than pointing and gesturing.
Sure there are a few visual programming suites, but they are best suited for stream processing, and not for heavily branched general purpose code.
And a brain interface is a much harder problem, and trusting AI to write code could have devastating results.
Exponential growth always hits an inflection point (Score:5, Insightful)
Re: (Score:2)
Of course, but have we hit that inflection point? By all accounts we're only slightly behind the times in transistor count in ICs, with them doubling every 3 years now instead of every 2. Still very much a large exponential gain.
Software Devs (Score:5, Interesting)
This all points back to software devs. I've spent two decades dealing with low-level drivers and optimizations in assembly language. Not that I would expect developers to write assembly language, but the problem I run into is that software developers of high-level languages can't even write efficient code at their level. On top of that, they don't understand how the language stack works, or which code constructs give better performance in one language versus another. They can't even profile their code anymore or look at logs.
If anything needs changing, it's software developers first. They keep eating up all the computer's resources and say "get more this/that for your computer." No, pull your head out of your 4th point of contact and learn to write efficient code. We were doing this shit in the 90s all the time. We even advertised for assembly programmers in NEXT Generation magazine, constantly!
While there's nothing wrong with using high-level languages, programmers today have lost the art of what it means to be lean and mean. I don't hire any developer unless they can demonstrate they know the stack for the language they use.
Me: "Oh, no assembly language experience?"
Applicant: "Oh, no. Is that required here?"
Me: "In rare cases, but I'm trying to understand if you even understand how a computer works at a fundamental level. In fact, have you ever worked with state diagrams?"
Applicant: "No."
Me: "Okay, you write an application that simply opens a file. What are the failure modes of your application and the opening of the file? Can you draw a state diagram for this?"
Application: "A flow chart?"
Me: "No, a state diagram. Given a set of inputs, I want you to diagram all outputs and failure modes for each state."
Applicants could answer these questions in the 90s and early 00s, but rarely anymore. I blame software devs for this problem. Hardware engineers are always having to pick up the slack and drag everyone uphill because software devs can't pull their own weight.
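For illustration only (my own sketch, not the poster's; it assumes a POSIX system and a hypothetical file name "example.txt"), the distinct failure states that a state diagram for "open a file" would have to cover look roughly like this:

```cpp
// Sketch of the failure modes behind "open a file" on a POSIX system;
// each enum value corresponds to a state a state diagram would need.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

enum class OpenResult { Ok, NotFound, PermissionDenied, NotADirectory, TooManyOpenFiles, OtherError };

OpenResult open_file(const char* path, int& fd_out) {
    fd_out = open(path, O_RDONLY);
    if (fd_out >= 0) return OpenResult::Ok;
    switch (errno) {
        case ENOENT:  return OpenResult::NotFound;          // path does not exist
        case EACCES:  return OpenResult::PermissionDenied;  // no permission on path or a parent
        case ENOTDIR: return OpenResult::NotADirectory;     // a path component is not a directory
        case EMFILE:
        case ENFILE:  return OpenResult::TooManyOpenFiles;  // per-process or system fd limit hit
        default:      return OpenResult::OtherError;        // I/O error, symlink loop, name too long, ...
    }
}

int main() {
    int fd = -1;
    const OpenResult r = open_file("example.txt", fd);  // hypothetical file name
    if (r == OpenResult::Ok) {
        std::printf("opened\n");
        close(fd);
    } else {
        std::printf("open failed: %s\n", std::strerror(errno));
    }
    return 0;
}
```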
When will my laptop be a quantum computer? (Score:2)
Re: (Score:2)
Quantum computers will be slower for most tasks. There are some tasks for which they will be much faster.
The problem is nobody noticed it (Score:2)
Re: (Score:2)
No, we can't. That is the point. The processor you get next year will only be marginally faster than the one from this year.
Re: (Score:2)
No, we can't. That is the point.
Define faster. Single-core performance may only increase by a few percent, but the number of cores keeps increasing. So if your algorithm can use multiple cores it will be faster; if not, it won't.
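A minimal sketch of that distinction (mine, not the poster's): the same work run once on one thread and then split across however many hardware threads the machine reports; only the second version gets anything out of extra cores.

```cpp
// Sketch: one serial pass versus the same work divided across hardware threads.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

static std::uint64_t range_sum(std::uint64_t begin, std::uint64_t end) {
    std::uint64_t s = 0;
    for (std::uint64_t i = begin; i < end; ++i) s += i;
    return s;
}

int main() {
    const std::uint64_t n = 100000000;  // amount of work
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    // Single-core version: one long loop on one thread.
    const std::uint64_t serial = range_sum(0, n);

    // Multi-core version: split the range into one chunk per hardware thread.
    std::vector<std::future<std::uint64_t>> tasks;
    const std::uint64_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::uint64_t begin = w * chunk;
        const std::uint64_t end = (w + 1 == workers) ? n : begin + chunk;
        tasks.push_back(std::async(std::launch::async, range_sum, begin, end));
    }
    std::uint64_t parallel = 0;
    for (auto& t : tasks) parallel += t.get();

    std::printf("serial=%llu parallel=%llu workers=%u\n",
                (unsigned long long)serial, (unsigned long long)parallel, workers);
    return 0;
}
```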
Re: (Score:2)
Ditto. My Surface Book Pro is only faster than the ThinkPad it replaced thanks to its faster SSD architecture. Both were bought at a $3k price point (work allowance), and I gotta say I prefer the lighter SB even if its keyboard and lack of a mouse nipple almost suck.
Moore's Law, parallelism and single thread (Score:2)
Can't read the link so I assume it's about parallelism.
I think we welcome languages that encourage users to divide a problem into many smaller ones. But do we really need them?
What I mean to say is that the value of software lies in the APIs and libs you develop. Having it perform well in a parallel environment takes a bit of clever thinking but most of us will hack it.
There are quite a few programming models and frameworks that already allow astonishing things to happen in parallel. What is Patterso
The 30 million line problem (Score:2)
Re: (Score:2)
I'm not sure I understand, but I don't really see the point in having individual applications be bootable on hardware. If anything, it'd make more sense to me to push more stuff from the OS into the firmware so that the firmware would present a standard set of APIs/protocols and the OS wouldn't need to worry about drivers. And then, in turn, standardize APIs across operating systems so that cross-platform apps would be easier.
Either way, good luck getting any meaningful change out of the computing indust
Maybe we can eliminate shared libraries (Score:4, Funny)
Moore's law is working more or less fine, thanks (Score:2)
Transistors are doubling every 24 months or so, on par with Moore's original enunciation of the law, and slightly off the 18 months of his revision of said law.
What is not working anymore is _*"people's interpretation"*_ of said law, which dictates that computers should be 2x faster every 18 months. Moore never said that. He only said that in a given square centimeter of silicon, the optimum number of transistors would double every 24 months. Then he later revised the number to every 18 months.
When Moore's law w
IT'S TMIE (Score:2)
The singularity.. (Score:5, Funny)
Faster and Faster Computers for Bigger Systems (Score:2)
Why is it that there are always people on here who think we don't need anything new or faster?
Have you never run a development system and thought, OMG, why is this taking so long?
We need way faster CPUs and computers. In fact, we need quantum computers.
I would love to compile 1.3 million LOC for 10 different platforms in 3 seconds.
The faster we go, the bigger the systems we can build. Try running a neural network the size of your brain!
A NN with 10^11 (one hundred billion) neurons and 10^15 synaptic
My big concern is nothing about speed (Score:2)
Intellectual dishonesty (Score:2)
Patterson's argument is blatantly intellectually dishonest... he talks about single-program performance as if single programs are never parallel these days. He mischaracterizes Moore's law as being about single-program performance (his creative definition). It is not; it is about transistor density, which continues to increase roughly according to Moore's law, with no end in sight. Sure, process-node shrink is slowing down, but parallelism is increasing rapidly, roughly balancing that. And 3D stacking is
Chip marketing? (Score:2)
How much of this slowdown is marketing driven? There's no reason to release a chip that's 50% faster if people are buying plenty of the older chip. You want to spread that out over time.
Re: (Score:2)
I would say that architecture is changing, at least for production systems. It's all about scaling horizontally instead of vertically. Sure, individual cores aren't much faster, but a couple years ago I launched 30,000 cores in two minutes on AWS, and about a year ago EC2 Spot announced a million-core stunt of some sort.
A million cores from COTS technology is a lot of performance.
Re: (Score:2)
More cores are worthless for most tasks. Takes some actual knowledge to see that though.
Re: (Score:2)
You just need to be smart enough to learn how to solve your problems with more cores!
My, these assertions are easy to make.
Re: (Score:2)
And if your system has more than 1 user (at the same time)?
Re: (Score:2)
More cores are worthless for most tasks.
That's the whole point of the article: getting more work out of your cores.
Re: (Score:3)
But this is the thing. I work on an asynchronous back-end application written in JavaScript (a stupid design decision; I wasn't around at the time). It uses all of one core and nothing from the other cores, so there is little I can do to give it more resources to work with. If the JS virtual machine could cope with working across multiple cores, then all would be good.
That's what this article is about, and it's a good question.
Re:so go do it, David (Score:5, Insightful)
I am pretty sure David Patterson is out there doing it. He is a professor in the field who has accomplished plenty. He is 70 now and is likely past his academic prime, so now he is doing what he should be doing at this time in his career: teaching, mentoring, and inspiring the next generation.
Re: (Score:2)
Sounds more to me like he cannot let go of the old, failed ideas and face the reality that there is no silver bullet. There are plenty of aging CS professors around with that problem.
Re: so go do it, David (Score:5, Insightful)
Show me an architecture that would make me want it. With C++ as a platform language
"Show me a new vehicle to replace the carriage. It has to include a horse."
Re: (Score:3)
CLR
Re: so go do it, David (Score:4, Interesting)
The oxymoron here is that David uses hardware performance to substantiate his claim.
The computer revolution went askew by taking the hardware track, leaving software to rot in 1960s state-of-the-art computer science.
The next revolution is soft, not hard.
Re: so go do it, David (Score:4, Interesting)
My whole life the main factor leading to people accepting sucky software is that the hardware is always getting better, and by the time they ship it it runs "fast enough" on the new hardware.
I've been anticipating this for decades; eventually computing power is "good enough" that people start actually trying to write good software. In my view the hardware is at that point, and time is ripe for software changes. And this will lead to tool improvements, to be sure. Architectures are likely to change, because a big part of what goes into existing systems is exactly this desire for the architecture to be relevant for a predetermined amount of time! If the hardware isn't improving and you can't just sell people the new version every few years, that radically changes the design considerations for the whole system. So far, that hasn't happened at the consumer level, and I don't know which changes will be successful, but there are likely to be radical changes in system architecture as companies start to design systems for much longer life.
I will make one specific prognostication: as new hardware architectures are introduced, more of the software will be pushed back into ROMs, and OS kernels will act more like microcontroller libraries, where you use system functions that on more expensive hardware are implemented in ROM and on cheaper hardware get copied to RAM (as is the case for most of the system now).
Right now, and historically, audio/video subsystems sometimes have "hardware acceleration," certain things like floating point hardware are not always available, some optical scanners have to have their firmware copied from the host driver, some laptops require custom drivers because of custom hardware support/acceleration of various subsystems, and there is no general mechanism to manage any of this. Each subsystem might have a locally-standardized interface, but there is nothing general for those classes of situations. Or at least, to the extent that there is, it is only within the C compilers that it exists. I predict much greater convergence in how these different subsystems are defined and how they interact. Return of the Thick Client! Except the cheap version will be a compatible thin client. And each subsystem will be able to be implemented as hardware, local software, or cloud services.
Re: (Score:2)
Depends on what your definition of "need" is. For example, I could say I need to run Minecraft with 220 mods, at 30 FPS, with hundreds of machine blocks. (with an i7-7700k I usually get around 11-12)
Re: (Score:3)
Depends on what your definition of "need" is. For example, I could say I need to run Minecraft with 220 mods, at 30 FPS, with hundreds of machine blocks. (with an i7-7700k I usually get around 11-12)
The straight up implementation of the Multi MMC Predictor algorithm over 1,000,000 symbols takes 20 minutes to run on my laptop. That's a problem when you want to test every device in a production line. I'll happily take a faster CPU.
Re: (Score:2)
That's the whole point of the story; with improved software architecture you might be using an algorithm that scales horizontally, in which case there would be no need at all for a faster CPU, just more CPUs. Or even more subsystems; maybe doing too much of the work on the CPU is the problem? In a lot of the tools that I use, the hard parts are done in FPGAs and embedded microcontrollers, and the CPU is just managing the user interface and system buses.
Re: (Score:2)
That's the whole point of the story; with improved software architecture you might be using an algorithm that scales horizontally, in which case there would be no need at all for a faster CPU, just more CPUs.
That isn't a new computer architecture, it's the same kind of distributed computing that I learned ~20 years ago, and it wasn't all that new back then, either. Maybe some new programming language could make it easier to implement some distributed algorithms, but current languages have the capabilities already.
Re: (Score:3)
The 500 Plus in 1995 was nothing to brag about.... it was still using a 68000 processor when other Motorola machines like Macs were on 68060s (at ten times the clock rate). Now if you had said "1985" then it would be impressive (the Amiga's initial release date).
I ran a modern web browser on a PowerMac G3 (1999).... slow as snails. Ditto on a Pentium 4 at 3000 megahertz. SD video is okay, but HD video runs like molasses.
Re: (Score:2)
Macs never used '060s. The last straight-up Moto chip used in a Mac was the '040, then it switched to PowerPC (which was still partly Moto, but you know what I mean).
Reminds me of a computer class in high school around the time of that switch, and the introduction of the Pentium, when our teacher was saying that Macs used 68020, '030, '040, '050... while PCs used 286, 386, 486, 586... and I had to interrupt and correct him on two counts.
Re: (Score:2)
1995 was actually after they stopped building Amigas. The 500 Plus was in 1991, and more of a last gasp, while the original 500 was in 1987.
Re: (Score:2)
You couldn't run a modern web browser on that, but that's because modern web browsers and web apps are just about the most bloated sacks of shit imaginable. 10 pounds of shit in a 5 pound sack with a tear in it.
But there's nothing inherently expensive in what a web browser actually accomplishes, other than playing compressed video. Flowing text to fit the screen in real time was impractical for 8-bit processors, but by the 16-bit days it was fine. Javascript is horrifying, but there's nothing impossibly
Re: (Score:2)
And yet, it could run an old web browser that could easily display normal pages of data. It seems the only things it would actually choke on would be all the advertising and user-tracking scripts. A few changes in CSS and things, but those have low overhead.
Re: (Score:2)
However, that modern web browser does a lot more work because it's doing a lot more unimportant stuff as well. Browsers now do a ton of stuff that they should not be doing - JavaScript and the like doing so much work that I could feel my computer slowing down until I killed the browser. Do browsers really need to be doing all this extra stuff? The Amiga was perfectly fine for an early web browser.
Consider word processing. The modern Word does not feel faster than the word processing I did on the Ami
Re: (Score:2)
Unfortunately, software developers keep introducing endless layers of abstraction so that shit still isn't running nearly as fast as it should be.
Re: (Score:2)
In the eighties we needed only 640 KB of RAM, in the nineties a 1 GB hard drive was more than enough space, in the new millennium HD was all the resolution we needed, and now we don't need faster chips?
Re: (Score:2)
Well, last week I left a job running overnight, and in the morning it still wasn't finished. It did eventually finish, so you can argue that I don't *need* a faster computer, but I'd sure like one.
Re: (Score:2)
Hey, I know. We should use asynchronous techniques! At both the circuit and the architecture level. (P.S. This is sarcasm, which students of Digital Logic and Computer Engineering may find amusing.)
The other half of the joke is that async I/O was the big new feature of a recent C# version, which means it will be the hot new thing in Java in another couple of years.
Re: (Score:3)
Hey, I know. We should use asynchronous techniques! At both the circuit and the architecture level. (P.S. This is sarcasm, which students of Digital Logic and Computer Engineering may find amusing.)
The other half of the joke is that async I/O was the big new feature of a recent C# version, which means it will be the hot new thing in Java in another couple of years.
Java NIO (Non-blocking I/O) was introduced in Java 4 (2002).
Re: (Score:2)
That will in no way prevent it from being the hot new thing in Java in a couple of years. Probably with all new libraries.
Re: (Score:2)
reddit
Plenty of porn then.
Re: (Score:2)
Boy, do I miss the old MC68000, where I could just look up how long something takes and could do actually fully synchronous coding for a significant performance boost.
Re: (Score:2)
Why miss what you can still have? Starting at under $33!
https://www.mouser.com/Product... [mouser.com]
But I'm not sure you'd have any advantage over a $5 ARM SoC.
Re: (Score:2)
Study the AS/400 architecture. It does things very differently.
Re:Moore's Law over? (Score:5, Insightful)
Because Moore's law is more about transistor density. It's an easy nit to pick.
It's still a big deal that we aren't getting any easy gains in single-core speed or, factoring in all the fancy new branch prediction, single-thread performance. But newer CPUs are fitting in more cores, and newer GPUs are wildly more effective (at a fundamentally parallel task). These are the arguments for Moore's law still holding.
Anyone who was around for the late 90s or before knows that computers simply aren't doing what they did before: completely obliterating previous generations of computing. A machine from 2008 can run most current games, and the ones it can't are usually blocked by artificial restrictions (a motherboard that won't take a new enough GPU, for instance). It can certainly run the latest version of pretty much any OS, and many productivity programs. If you do that comparison from 1999 to 1989, it's a joke: a Pentium III at nearly a gigahertz compared to a 486 at like 50 or 66 megahertz. Look back again at 1979, and you are comparing to an 8086 or something.
Re:Moore's Law over? (Score:4)
As I posted above, Moore's Law says "transistors will double every 2 years." That held true until 2015, when it slowed to 2.5 years (not a huge difference).
Re: (Score:2)
Some pedant always crawls out of the woodwork and starts talking about "transistor density".
You mean pedants like Gordon Moore, who came up with the law that bears his name? But I mean, there's no need to take anyone's word for it... errr, except this guy's: http://www.lithoguru.com/scien... [lithoguru.com] he's kind of a guru when it comes to his own law.
Re: (Score:2)
I've been saying that for 8 years. Of course whenever I say it here on Slashdot I get assailed by people who can't accept the truth.
Only because you a) don't understand Moore's law, or b) can't count to 7,000,000. Moore's law is alive and well, and only slightly behind the trend line over the past two years. Even Intel is doubling the transistor count and changing gate size every 2.5 years now instead of every 2 years, as Moore's law predicted.
8 years ago? Don't tell me that you, a person nerdy enough to read news for nerds, don't understand that the only thing Moore's law counts is transistors. I mean, it's not like you confused it with
Re: (Score:2)
Gordon Moore's paper seems pretty clear on component count. https://drive.google.com/file/... [google.com]
Re:Idiot. We have enough stupid languages. RISC su (Score:5, Interesting)
Re: (Score:2)
Right, these days the CISC target is an abstraction that gives the decode unit in the chip one last chance at optimization before sending work to the computational assemblies. 80% of the out-of-order speed advantage comes from better static schedules on chip. And most of the circuitry on a modern chip is dedicated to figuring out what to do next and getting ready to do it, versus actual silicon dedicated to doing it. Something like a VLIW can get a lot more performance/watt by letting the compiler do the scheduling and set
I love the Mill (Score:3)
Re: (Score:3)
Trouble was, RISC solved a problem that was temporary - lack of space on the silicon.
There is the core of your misunderstanding. The idea of RISC was not to use fewer transistors, but to have a simpler, more orthogonal instruction set with homogeneous stages all taking about the same time, to enable high clock speeds and pipelining. And yes, there are excellent RISC designs out there: ARM is one, and so is RISC-V. "CISC" nowadays is 99% RISC; they copied the large register sets with x86_64, and they have broken the CISC instructions down into micro-ops that are executed RISC-style since
Re: (Score:2)
What twisted your knickers so badly?