Computer Scientists Develop 'Mathematical Jigsaw Puzzles' To Encrypt Software
another random user writes "The claim here is that the encrypted software can be executed, but not reverse-engineered. To quote from the article: 'UCLA computer science professor Amit Sahai and a team of researchers have designed a system to encrypt software so that it only allows someone to use a program as intended while preventing any deciphering of the code behind it. According to Sahai, previously developed techniques for obfuscation presented only a "speed bump," forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an "iron wall," making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that take hundreds of years to work out on today's computers — a game-change in the field of cryptography.'"
I Call BS (Score:5, Insightful)
I'm sure they can further obfuscate the actual code, but at the end of the day the processor is going to have to run machine code, and one way or the other you can tap the processor's activity to read the "decrypted" code. Beyond that, I imagine the performance penalties involved will be monstrous. Even normal obfuscation techniques have pretty heavy penalties.
Re: (Score:2)
TPM could put a stop to that by requiring auditing of voltages.
MS already requires this of PVP drivers for Vista and will revoke them if you allow copyrighted content to leak.
Re:I Call BS (Score:5, Informative)
Which isn't to say you can't look at some variable in RAM, realize it is the boolean for 'authorized,' and then overwrite it. So it's essentially a complex obfuscation technique, which may or may not be useful. (That is my understanding of the article and abstract, if I am wrong please correct me).
Re:I Call BS (Score:5, Informative)
I know a fellow who worked on a system that would take IBM 360 machine code, convert it into graph format, and re-engineer it as C/C++ code. They made a killing on Y2K, because so many companies didn't have the source code for some of their software any more.
So yes, I call BS as well. Computationally complex, yes, as graph theory often is. But far from impossible to reverse engineer.
If it can execute, it can be reverse engineered. Only the most obtuse and purposefully warped hand-written assembler can even approach the ability to hide an algorithm from deep inspection, and even that can be largely overcome by applying graph theory to restructure the code graphs.
Re: (Score:2)
You can contact professor Thomas Dean at Queen's University in Kingston for details of how such graph theory applies -- he was involved with the development of this reverse engineering tool.
Re: (Score:2)
No, you have an unnamed function with unnamed parameters that takes a series of unknown numbers, and returns another unknown number.
The idea of obfuscation is that it makes the job hard to make those unknowns known, so that you can name the unnamed.
I didn't read the article, but it looks to me like at some point, the code is going to call some OS API functions, or call the UI library, or similar. And you can always start the naming from there. But of course if the WHOLE PROGRAM is obfuscated, so that the in
Re: (Score:2)
...then it may be impractical to figure it out. Not impossible though.
Everything in cryptography relies on the impracticality of brute-force attacks, which are never impossible. That's why we talk about security in terms of hundreds of years.
Re:I Call BS (Score:5, Informative)
You can make the algorithm as complex as you'd like, but at the end of the day, you have an input and output(s). You may decide to take a long time to get there, but at the end of the day, I know what you did when the code ran.
This is referred to as "black-box" reverse engineering in the paper. You know what the code did for one input. And if you injected every possible input to the program, at the end you'd have worked out a complete specification for the function. But how long will that take? It's not always "the end of the day". For functions with a wide range of inputs, it could take the lifetime of the universe to map them all unless you know what you're looking for in advance.
Right now, obfuscation approaches for software usually have some small choking point to attack. It might be an encryption wrapper around the main binary. Bust that with a debugger and you can get to unobfuscated code for the main system, and then really start to work your way through the program.
But if you have to fight this every step of the way, where all you can do is inject inputs and read outputs to figure out what the program does, it will take you forever to reverse engineer things. That's the claim of the paper. The code itself is so obfuscated that you can't read it straight and understand it, ever. It looks like random junk. All you can do is run it with an input, see what comes back, and from that reverse engineer what it does. Assemble enough of those and you can see how the program really works. But that black-box teardown process (determining possible inputs, injecting them, collecting outputs, and then deriving the function's behavior) is time-intensive enough that it may not be practical to actually do anymore. You don't learn anything from reversing one component that speeds up handling the next; you have to attack them all like this.
There's a great line from the seminal paper on this subject, On the (Im)possibility of Obfuscating Programs [weizmann.ac.il]: "Any programmer knows that total unintelligibility is the natural state of computer programs (and one must work hard in order to keep a program from deteriorating into this state)." Extracting meaning from source code, being able to predict what some lines of code will do, is hard. Ideally you'll just be able to read the code, make sense of it, and then reverse from there. Most systems that are thoroughly cracked have some sections where it's hard, but once you get past those the remaining code is straightforward to read. And in fact, it's impossible to make something that cannot be reversed at all. The question, though, is how long it will take.
If no sense is ever made of the code, you can only apply "black-box" reverse engineering, where you inject inputs, watch outputs, and from there determine what the code does. You can't prevent that, but you can make the box so big that such work is impractical to do. That's what this technique tries to accomplish. You never find an easy part of the code you can read; all you ever find are parts where you have to map every input to an output to figure out what the code does.
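The scale is worth making concrete. A quick back-of-envelope sketch (Python; my own toy, not from the paper, and the billion-runs-per-second rate is a deliberately generous assumption):

# Cost of "black-box" reverse engineering by exhaustively mapping
# inputs to outputs. Purely illustrative numbers.

def queries_needed(input_bits: int) -> int:
    """Runs required to tabulate a function of n input bits."""
    return 2 ** input_bits

QUERIES_PER_SECOND = 10 ** 9  # generous: a billion runs per second

for n in (16, 32, 64, 128):
    q = queries_needed(n)
    years = q / QUERIES_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"{n:>3} input bits: {q:.3e} runs, ~{years:.3e} years")

At 64 input bits you're already looking at centuries of queries; at 128 bits, far longer than the age of the universe.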
Re: (Score:2)
Not really. I've seen memory encryption units that ensure that all data hitting memory is encrypted, and it's possible to have the startup code also
Just doesn't work... (Score:2)
Suppose you build something that throws in a lot of crazy random jmps to make this harder, and I can't be bothered to reconstruct a program I want to steal. At some point a single Boolean decision hits the call stack
Re: (Score:2)
You're focused on a crack of one function in this sort of program. Given enough time, it is always possible to map all the inputs and outputs of a function, and therefore replace it with another that does the same thing--but with a change like a crack installed. The question, though, is how much time will it take?
There's no easy-to-spot boolean on the line here; we're past the era when idiots built these things. Let's say the output from the DRM checker is a 1024-bit key that unlocks the next function in
Re: (Score:3)
I implemented exactly this circa 1990 to protect a small database of disambiguation rules struc
Re: (Score:2)
One extra detail: the alphabet of 50 characters was the effective entropy over a much larger space of symbols. I described the tree in entropy space, because that is what mattered to its performance profile. The naive view is that the symbol set contained 8000 symbols and that four-character strings could be selected from a set of 8000^4 members.
I ignored this detail because conventional reverse engineering would very quickly determine that we only go to the hash table for a much smaller nucleus of t
Re: (Score:2)
Wow, beautiful reply. The bottom-line message for me was that it was a one-off, in a sense security by obscurity (or, as you put it, by expensive obfuscation), and that to make it work you had to hand-code it at a key point in a suitable application. And it doesn't make functional reverse engineering more difficult (seeing what the application does and trying to duplicate it functionally); it just makes disassembling the code into semantically meaningful functional units that can similarly work together t
Re: (Score:2)
"at the end of the day the processor is going to have to run machine code, and one way or the other you can tap the processor's activity to read the "decrypted" code"
Sure, and one way to reconstruct the program is to provide it every possible input and map the outputs. For that matter, one way to reconstruct the program is simply to load it up, see what it does, and code your own version that does the same thing.
But the question is how deeply you can inspect the algorithms based on what you see happening i
Re: (Score:2)
I do not think this is the aim. The aim is to hide the "high level" algorithm so that you cannot find it out. The obfuscation seems to be done by transforming the source, not the binary.
This way, if you have some "interesting" high-level algorithm (to solve some real-life problem) you can make reverse-engineering it not just a tad hard, but extremely difficult. For example, suppose I can solve NP-complete problems in polynomial time. You can see from the virtual machine traces how one case of the problem is solved, but you cannot get
Re:I Call BS (Score:5, Insightful)
Unless the software can magically tell that it's running in a debugger or a sandbox and refuse to execute, its activity, from stack operations to system calls to memory allocations, can be traced.
Re:I Call BS (Score:5, Informative)
When I was writing Windows device drivers, we just *loved* stupid games that detected a debugger and refused to run.
Fortunately most developers stopped doing that once they realised that their customers would have to wait for bug-fixes until they'd reported it to us and then we'd contacted the developers and they'd sent us a special version without the retarded code that stopped us debugging the problem.
Re: (Score:2)
I see no reason you can't use a VM to accomplish the same damned thing.
Re: (Score:2)
'Cause the game detects it's running in a VM?
http://communities.vmware.com/thread/273480?start=0&tstart=0 [vmware.com]
Re:I Call BS (Score:5, Insightful)
That's because VMs really don't try to hide themselves from the guest. It might be pretty hard to build a VM that does a good enough job of fooling software trying to tell whether it's running on physical or virtualized hardware, but we have the source for a number of VMs (e.g. KVM and Xen), and if this kind of obfuscated software started showing up on the market, I'm sure there would be a much greater push to make a rock-solid virtualized environment that mimicked physical hardware with much more fidelity.
Re: (Score:3)
You slice off the top of the chip and probe it. Sure it's expensive, but the people this level of security is intended for have the resources.
Re:I Call BS (Score:4, Insightful)
This exactly -- virus writers have been at the forefront of code that hides and obfuscates itself, and VM-type systems were developed to run that code in an emulator and determine its effects without actually executing it on the host.
So long as the code can be executed, it can be reverse-engineered.
Re:I Call BS (Score:5, Informative)
Not exactly. Reverse engineering starts with the analysis of a running program, with the goal of obtaining enough data about the locations of executable code vs. data, major variable locations/identifiers, etc. to let you start running automated disassembly of the total code. All you can get out of a run-time analysis is a specific execution path, which does not embody the full code.
Now yes, "if your CPU can read it, then you can read it", but unfortunately in this situation your CPU can only read it in certain circumstances, so you'll only be able to read it in those same circumstances: execution or simulated execution, leading back to the situation where you're stuck looking at traces of specific executions...
Re: (Score:2)
Re: (Score:2)
Re:I Call BS (Score:5, Interesting)
Jigsawing is not encryption; it's splitting things up into shared object libraries (.dlls) that the linker then has to "reassemble" as the program is loaded into memory where it is executable, i.e. the program counter can be pointed at it and let run. No, the best obfuscation I have yet seen is x86 assembler, and even that doesn't run without a ton of overhead, but... disassemblers exist for it. /. is right to scoff: A) the write-up is contradictory; B) the article makes no sense, because in order to run the code the processor has to solve the puzzle, so you just stick it in an emulator and hit record; C) this looks like a school and/or corporate overlords hyping an unpublished paper, which many of us recognize as one way peer reviewers are pressured into signing off on snake oil; D) who claims this is the first attempt at obfuscating code? This thing: http://www.ioccc.org/ is in its 22nd year.
misunderstanding (Score:3, Interesting)
Either you misunderstand or I do, but my understanding was that this system allows a function to be expressed so that the code will only execute with certain parameters; hence the expression can only compute certain evaluations (let's say one, for simplicity) of the original function, and the true function can be hidden, so that the function evaluated at other parameters cannot be computed from this expression.
Basically a really complex way of designing a look-up table?
now as you say, you can never hide the v
Re: (Score:2)
So long as we're able to see the instructions being given to the processor (hint: we pretty much always can), you can reverse engineer the code. Performance penalties or not, the entire thing is pointless if it doesn't achieve what it's set out to do, and you don't have to be an expert in this area to see the hole that's big enough to steer a panamax-class cargo carrier through. Despite that, the problem went wholly unaddressed in the summary and I'm not seeing any comments here indicating otherwise, which
Re: (Score:3)
OK, as an example: what if you have worked out a way to transform an arbitrary program into a huge bunch of "if else" and goto statements with "magic numbers", or worse (e.g.
setaddress(mod(magic1+sha256memoizer(magic2+parm1+parm2+...),dataend))=parm3;
linenumber=mod(magic3+sha256memoizer(magic4+parm5+parm6+...),maxlines);
goto linenumber;
) and some stores and loads, so that it's still equivalent and does the same thing, just a bit slower. So when you disassemble it, it's still a big mess that makes it hard to
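For what it's worth, here is a minimal runnable sketch of the flattening idea that pseudocode is gesturing at (Python; the program, the labels, and the hash trick are my own toy choices, not a real obfuscator): structured control flow collapsed into a flat dispatch loop keyed by opaque hash-derived "magic" values.

# Toy control-flow flattening: gcd() rewritten as a flat state machine
# whose states are opaque hash-derived constants.
import hashlib

def magic(label: str) -> int:
    # Opaque state ids standing in for the "magic numbers" above.
    return int.from_bytes(hashlib.sha256(label.encode()).digest()[:4], "big")

ENTRY, LOOP, DONE = magic("entry"), magic("loop"), magic("done")

def flattened_gcd(a: int, b: int) -> int:
    state = ENTRY
    while True:
        if state == ENTRY:
            state = LOOP
        elif state == LOOP:
            state = DONE if b == 0 else LOOP
            if b != 0:
                a, b = b, a % b
        elif state == DONE:
            return a

print(flattened_gcd(1071, 462))  # 21

The disassembly of something like this is one big dispatch loop; the original if/while structure is gone, which is exactly the "big mess" being described.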
Re: (Score:2)
So long as we're able to see the instructions being given to the processor (hint: we pretty much always can), you can reverse engineer the code.
True. As long as you are able to follow every possible path of execution of the code. Which puts us in the realm of "theoretically possible, but unfeasible in practice".
How do we normally reverse engineer code? We run the code through a debugger/logger to identify contiguous areas of code and data, and to highlight common destination points for branch conditions and data addresses frequently read from/written to. That information is used to inform the automated stages of disassembly. Probabilistic models
Re: (Score:2)
You're claiming to be able to reverse engineer an entire cookbook just by chemically analysing a cookie from one recipe in that book. You'd have to make every single recipe in the book and analyse it before you had a complete representation of all the recipes.
Re: (Score:2)
No, what you're describing is looking at a finished product and inferring the process of how it was created. That's very difficult.
What GP is referring to is watching someone follow the recipe (without seeing the recipe itself), and then inferring the recipe. Much easier.
Re: (Score:2)
Yeah, well, release a working proof and make 5 billion.
I'm not shitting you: if you can really do even a simple proof of concept that does that, then you could bag a lot of money next week.
I can accept that you could build this system so that it will spit out the right answers for the right predetermined inputs but nothing else, and even then you could see it do its thing with that input while it might not provide you the outputs for other inputs, but that's pointless as a program for most any uses a computer progra
Re: (Score:3)
I call BS on the notion that my CPU is smarter than me. The article claims that it takes "hundreds of years" to break this. Obviously then it must take the program "hundreds of years" to run. Otherwise the CPU is using a shortcut. All I have to do is figure out what the shortcut is (hint: figure out what the CPU is doing and where it got its instructions from) and it's cracked. If your computer can run it, you can figure it out. It's that simple.
By this argument, PGP, RSA et al must be worthless too, because if it takes hundreds of years to break a 256-bit cypher it must take hundreds of years to run a 256-bit cypher. You know, if your computer can run it, you can figure it out.
I hope you can see the flaw in your argument now.
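To make the asymmetry concrete, a toy sketch (Python; the numbers and the scenario are mine, purely illustrative): running the computation with the secret in hand takes microseconds, while recovering the secret from the output is brute force.

# With the secret, the computation is instant; without it, you are
# brute-forcing. Toy discrete-log flavored example.
import time

p, g = 2 ** 61 - 1, 3          # a Mersenne prime and a small base
secret = 123_456_789_012_345   # the "shortcut" the CPU knows

t0 = time.perf_counter()
public = pow(g, secret, p)     # fast: ~log2(secret) modular multiplications
print("forward run:", time.perf_counter() - t0, "seconds")

# Recovering `secret` from `public` by trial exponentiation means up to
# ~2**61 forward runs: centuries, even though each forward run takes
# microseconds. Fast to run does not imply fast to invert.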
Re: (Score:2)
By GP's argument, for the CPU to decrypt the message so quickly, it must be using a shortcut. In your example, the shortcut is that the CPU knows the secret key.
Do you honestly think that someone watching the CPU decrypt the message can't access the message themselves? http://en.wikipedia.org/wiki/DeCSS [wikipedia.org]
Re: (Score:3)
Yeah, but once they add the GUI interface using VB, it'll be uncrackable!
Re: (Score:3)
Even better than the grocery list is Searle's Chinese Room. A man sits in a sealed room. He doesn't speak or read Chinese, but every now and then somebody pokes a slip of paper in under the door with some Chinese on it. The man picks it up and goes to a vast filing cabinet, matches the symbols on the paper, and pulls out a paper with other symbols on it, which he pushes out the door. In this way the entire room can act like a "speaker of Chinese" even though the man in the room doesn't recognize a word of it.
Re: (Score:2)
In other words, it's going to malloc() a big pile of memory, then throw the actual code all over the place, linked together with a helluva lot of jmps and indexes to keep track of them. Yes, I'm sure if every second or third clock cycle is spent leaping over the process's malloc'd space to fetch the next instruction or the next byte from the symbol table, it may be too complex to actually decompile into a reasonable representation of the original source code. But as others have pointed out, that l
Deciphering != Reverse Engineering (Score:4, Informative)
Deciphering/Decrypting is not the same thing as Reverse Engineering.
Reverse Engineering is duplicating the functionality without seeing the source code. That should still be possible if you have the ability to run the program.
Re:Deciphering != Reverse Engineering (Score:5, Insightful)
One way around this (for reverse engineering) would simply be to run it inside a VM with a built in stack trace debugger, like Bochs.
You can peek at the raw instructions percolating through the VM's emulated CPU that way. The application itself is then the decryption key, since the system has to be able to run the application.
PITA, but I don't see how this is anything more than just a "bigger" speedbump, at least as far as disassembly goes. Now, making said decryption require nasty bits of hardware to thwart using a VM, and clinging tightly to treacherous computing with a TPM and a hardware hypervisor on top of this? That's more dicey.
The question here is... why do this? Is preventing Cracker Jack from the warez scene from simply disabling your "authenticity checks" so horrifically important? That's the major application for this that I really see, besides making epically horrific malware. (Though if you ask me, they are both the same thing.)
Seriously, other than making personal computing become something from communist Russia, what is the benefit of this?
Re: (Score:2)
Yes, the only way this works is with hardware to back it up. But if we're back to effectively using some sort of dongle to execute, well that road has been well-traveled as well.
Re: (Score:2)
I was thinking more along the lines of "special processor instructions", that make use of quirks of real silicon. Given how intel and amd both have taken to cramming lots of extra devices onto the die, a small "inside the CPU" black box designed for this very application would do the trick just fine, and would likewise ensure its presence on modern rigs.
A virtual machine halting execution would be detected by the running software, because it wouldn't get the proper responses from said black box. You'd have
Re: (Score:2)
Yup, and if you can run the code, no matter how obfuscated, in a controlled environment like a virtual machine, then no matter how complicated it may be, its interaction with the virtualized hardware is going to be observable. I guess you could start writing software that sniffs out that it's in a VM, either by looking for paravirtualization or simply by looking for some subset of bugs or limits in the virtualized hardware that one wouldn't find on actual physical hardware, but that's just the better mouse trap
Re: (Score:2)
Hardware that can't be emulated you mean.
Even high grade encryption hardware that refuses to allow debugging can be emulated, if slowly, once you have keys.
Re: (Score:2)
Seriously, other than making personal computing become something from communist Russia, what is the benefit of this?
Wait, isn't that considered to be feature enough?
Re: (Score:2)
Seriously, other than making personal computing become something from communist Russia, what is the benefit of this?
Security. One major benefit of obfuscation is making it much more difficult to find local data store encryption keys, service endpoints, etc. It makes it harder to find bugs/exploits such as SQL injection.
Remember that not all attacks are aimed at the software in general. Many, many attacks on medical/banking/government systems are aimed at finding specific data on specific computers, and the attacker isn't running it on a VM. These attacks rely on perhaps a trojan or backdoor. The harder it is to buil
I don't Think That's The Point (Score:2)
You don't need to emulate the hardware to see a program's output; you can just look at your screen. There is little point in hiding the output of a program from the user who's running it, so I don't see that as the point (if one cannot observe the output of a program, it is of little utility).
Which instructions a given program executes depends on the inputs to said program. For any given input, most programs only execute a tiny portion of their code. Therefore, in order to completely reverse engineer a pr
Re: (Score:2)
Agreed; for each nested branch, complexity increases at LEAST geometrically. (A branch always has at least 2 paths.) However, one may not NEED to know *all* branches.
Say for instance, the behavior you want to modify (we will assume you are a cracker making a memory poking drm defeat patch) is rigidly defined in where it gets called from (by this, I mean what code branches spawn the check). You don't really need to look at the other execution paths, just the ones that trigger the drm check, or better still, t
Re:Deciphering != Reverse Engineering (Score:4, Informative)
Yes, it is robust. I read the paper a few days ago.
All these comments about how you can "just look at the CPU instructions" are made by people who haven't been following developments in the field. The program never gets decrypted into CPU instructions. Heck, it was never even compiled into CPU instructions in the first place. It gets compiled into a form of boolean circuit, a mathematical equivalent of an electronic circuit that is composed of AND, NOT, OR, XOR gates and wires between them. Then that circuit is itself again transformed into a series of matrices and at that point I hit the limit of what I could understand without needing to read some of the cited papers.
This is a very, very complicated technique that builds upon decades of cryptographic research. If they say it's secure in the cryptographic sense, I think it's very likely to be so.
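For readers who haven't seen a program expressed as a circuit, here is a toy evaluator (Python; my own encoding, for illustration only; the paper's further transformation of the circuit into matrices is not shown): a one-bit full adder written as a flat gate list.

# A "program" as a boolean circuit: a list of gates wired together.
# Each gate: (output_wire, op, input_wire_a, input_wire_b_or_None).
FULL_ADDER = [
    ("t1",   "XOR", "a",  "b"),
    ("sum",  "XOR", "t1", "cin"),
    ("t2",   "AND", "a",  "b"),
    ("t3",   "AND", "t1", "cin"),
    ("cout", "OR",  "t2", "t3"),
]

OPS = {"AND": lambda x, y: x & y,
       "OR":  lambda x, y: x | y,
       "XOR": lambda x, y: x ^ y,
       "NOT": lambda x, _: 1 - x}

def evaluate(circuit, inputs):
    wires = dict(inputs)
    for out, op, a, b in circuit:
        wires[out] = OPS[op](wires[a], wires[b] if b else 0)
    return wires

w = evaluate(FULL_ADDER, {"a": 1, "b": 1, "cin": 1})
print(w["sum"], w["cout"])  # 1 1, i.e. 1+1+1 = 0b11

There are no CPU instructions to "look at" in this representation; loops in the source become unrolled copies of gates, and the obfuscation then operates on the gates, not on machine code.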
Re: (Score:2)
The thing is that "secure in the cryptographic sense" means "provably secure against a defined threat model", and the threat model may not be relevant to real world applications. My understanding of their proof of security is that the threat model is black-box evaluation, which doesn't seem very relevant to real obfuscation.
Gotta love articles without details (Score:2, Funny)
Clearly the journalist had no idea what they were writing about.
Re: (Score:2)
You mean a link to the paper [iacr.org], like the one in the third paragraph of the article?
Re: (Score:2)
When you get to that, then it's the GP's turn to have no idea what the paper is talking about.
I took a glance at it. I probably won't really understand it unless I spend a week or two seriously reading it and its cited articles...
Victory for virus writers (Score:4, Insightful)
Who gains from this? Most of all? Virus writers.
Perhaps the real solution is to have the OS never execute code looking anything like this.
Re: (Score:3)
Heh. No fucking shit. It would be a malware writer's holy grail; software that has no identifiable signature.
I still think it's BS, at least on any kind of processor or operating system we're running now.
Re: (Score:2)
Heh. No fucking shit. It would be a malware writer's holy grail; software that has no identifiable signature.
I don't think anti-virus software needs to understand, reverse-engineer or decipher a virus program in order to identify its signature. That's not how it works. That's why there are false positives with programs that do not do what the anti-virus software thinks they do.
Re: (Score:2)
As long as the code is always identical, you're right. It is likely, though, that this machinery can produce vastly different results from minuscule changes in the code, resulting in a very easy way to generate an arbitrary number of copies of your virus code, none of which looks anything like the others to a scanner.
Re: (Score:3)
That said it would be safer if OS makers solved it by better sandboxing and better sandboxing infra/UI. Sandboxing is like "solving" the halting problem by setting a time limit.
In fact you're still in
Hurry up, Europe is hungry for your fines (Score:5, Interesting)
(14) A person having a right to use a computer program should not be prevented from performing acts necessary to observe, study or test the functioning of the program, provided that those acts do not infringe the copyright in the program.
(15) [...]Nevertheless, circumstances may exist when such a reproduction of the code and translation of its form are indispensable to obtain the necessary information to achieve the interoperability of an independently created program with other programs.
It has therefore to be considered that, in these limited circumstances only, performance of the acts of reproduction and translation by or on behalf of a person having a right to use a copy of the program is legitimate and compatible with fair practice and must therefore be deemed not to require the authorisation of the rightholder. An objective of this exception is to make it possible to connect all components of a computer system, including those of different manufacturers, so that they can work together. [...].
make a crackme (Score:3, Insightful)
If they really think it is so good, then they should put their money where their mouth is.
Make it into a crackme and offer a large reward for solving it.
Post it online. I give it a few weeks max, if that.
And who is to say it can't still be manipulated once running?
Think of the performance cost.
Either way, I have no faith in an article with so few details.
Re: (Score:2)
Well, DUH!
Be honest. Imagine you have something like that. Where would you present it?
At a tech conference, where techs are present who can't throw money at you but can crack it?
Or at a management conference, where both parts of that statement are inverted?
Performance hit (Score:2)
and programs will continue to run like it's 1987
A giant step backward for maintainability? (Score:2)
The program will have to DO something (Score:3)
Call the kernel to access files, sockets, etc.
Also, unless the developer is super-31337 and likes to write everything themselves, I expect shared library calls too.
By watching calls to those interfaces we can figure out what it does.
Re: (Score:2)
Even if all the libraries are statically compiled, it's still going to be making system calls. Yes, its behavior could be very hard to determine, but unless it's a standalone operating system that runs entirely on its own, it's going to be traceable. Even then, one could still run it in a VM. Perhaps harder to get useful information out of, but still...
Re: (Score:2)
Reverse engineering something like that will be hard if it's obfuscated.
If you can't reverse engineer it you can't make a compatible version without breaking copyright by copying the hash module as is.
The original paper (Score:5, Informative)
Cryptology ePrint Archive: Report 2013/451
Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits
Sanjam Garg and Craig Gentry and Shai Halevi and Mariana Raykova and Amit Sahai and Brent Waters
Abstract: In this work, we study indistinguishability obfuscation and functional encryption for general circuits:
Indistinguishability obfuscation requires that given any two equivalent circuits C_0 and C_1 of similar size, the obfuscations of C_0 and C_1 should be computationally indistinguishable.
In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x), yields the value C(x) but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.
We give constructions for indistinguishability obfuscation and functional encryption that supports all polynomial-size circuits. We accomplish this goal in three steps:
- We describe a candidate construction for indistinguishability obfuscation for NC1 circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.
- We show how to use indistinguishability obfuscation for NC1 together with Fully Homomorphic Encryption (with decryption in NC1) to achieve indistinguishability obfuscation for all circuits.
- Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.
Category / Keywords: public-key cryptography / Obfuscation, Functional Encryption, Multilinear Maps
Date: received 20 Jul 2013, last revised 21 Jul 2013
Contact author: amitsahai at gmail com
Available format(s): PDF [iacr.org] | BibTeX Citation [iacr.org]
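One way to build intuition for what "indistinguishability" demands (a folklore observation, not the paper's construction): for circuits small enough to tabulate, replacing the circuit with its truth table is a perfect, though uselessly exponential, indistinguishability obfuscator. A toy sketch in Python:

# Equivalent circuits must obfuscate to indistinguishable objects.
# Canonicalizing to the truth table achieves that trivially, at
# exponential cost; illustrative only.
from itertools import product

def obfuscate(circuit, n_inputs):
    """'Obfuscate' by replacing the circuit with its truth table."""
    return tuple(circuit(*bits) for bits in product((0, 1), repeat=n_inputs))

c0 = lambda a, b: (a ^ b) ^ (a & b & 0)      # a contrived form of XOR
c1 = lambda a, b: (a | b) & (1 - (a & b))    # a different form of XOR

print(obfuscate(c0, 2) == obfuscate(c1, 2))  # True: indistinguishable

The hard problem the paper attacks is achieving the same guarantee at polynomial size, where tabulating is out of the question.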
Re: (Score:2)
I already read the paper some days ago when it was first uploaded to the IACR pre-print archives. Yes, the paper is the one being referred to. It's a very interesting result, although not really impactful at the moment for things like game DRM.
The confusion arises from terminology. The technique applies (presently) to pure functions. You can write those functions in, for example, a subset of C because there exist compilers that transform such programs into boolean circuits, and circuit form is what they obf
Wish I had thought of this myself (Score:2)
Re: (Score:2)
This reply is typical of a growing percentage of replies on /. : abuse & insults, under the cloak of Cowardness and Anonymity. Well done, I am impressed by how you immediately detected my grammatical error.
I regret to inform you, though, that the correct word for what you speculate me to be is dumbfuck
Re: (Score:2)
1. to glance at or over or read hastily: to scan a page.
And even if you had been correct, it was an unnecessarily harsh comment, but do please feel free to hide behind the mantle of the Anonymous Coward.
like DVDs and all the others (Score:2)
Though I haven't studied this particular one, my general impression is that it was not designed by cryptography experts and then vetted the way well known cryptographic algorithms are vetted by other experts.
Re: (Score:2)
Actually the encryption itself on these schemes was not broken. Rather, player emulators became good enough that the industry could not revoke hacked players fast enough to keep up.
This actually has a name. (Score:2, Flamebait)
It's called Forth [wikipedia.org]
What this means. I think. (Score:5, Interesting)
This is fascinating, but hard to understand. It's not clear how broad a result this is.
This seems to apply to "circuits", which are loop-free arrangements of AND, OR and NOT gates. These take in some bit string with a fixed number of bits and emit another bit string with a fixed (but possibly different) number of bits. Many computations can be encoded into this form, but the "circuits" are typically very deep, since this is sort of like loop unwinding. Although this seems kind of limited, you could, for example, make a reasonably sized "circuit" to compute DES, which is a static pattern of Boolean operations.
"Obfuscation" here means taking in some "circuit" and producing a bit for bit equivalent "circuit" from which little information can be extracted. The obfuscated circuit may be (will be?) more complex than the original, because additional logic has been inserted in a somewhat random way.
The contribution in this paper seems to be that this might be useful for "functional encryption". This is where you have some values, such as A and B, which are encrypted and are to be protected, but the result of some function f(A,B) is not protected. The idea is to have an implementation of f which combines the decryption and the desired function, an implementation which cannot be reverse engineered to return decrypted versions of A or B. While this has been suggested as a possible feature for databases, it's hard to apply to useful functions, and so far, it's mostly a research idea there. It's been suggested for some complex access control schemes involving mutual mistrust, but that too is a research topic.
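A toy sketch of that interface may help (Python; a trusted-party simulation with hypothetical names, nothing like the actual cryptographic construction): the ciphertext hides A and B, and a key tied to f reveals only f(A, B).

# Functional encryption *interface*, simulated by a trusted party.
# NOT a secure scheme; it just shows what the primitive promises.
import os

class ToyFunctionalEncryption:
    def __init__(self):
        self._store = {}            # "master secret": plaintexts by handle

    def encrypt(self, *values):
        handle = os.urandom(16)     # opaque ciphertext stand-in
        self._store[handle] = values
        return handle

    def keygen(self, f):
        # A "secret key for f": it can only ever compute f on plaintexts.
        return lambda handle: f(*self._store[handle])

fe = ToyFunctionalEncryption()
ct = fe.encrypt(42, 58)                  # A and B stay hidden
sk_sum = fe.keygen(lambda a, b: a + b)
print(sk_sum(ct))                        # 100: the sum, but not A or B

The cryptographic achievement is getting this behavior without any trusted party: the "and nothing else is revealed" property has to hold against whoever holds the key.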
Anyway, this doesn't mean you're going to be able to run your big executable program through some kind of magic obfuscator that will prevent someone from figuring out how it works. Not yet, anyway.
Other aspects of the paper - health data (Score:4, Interesting)
I can't really comment on the slashdot summary, but take a look at the actual abstract: http://eprint.iacr.org/2013/451 [iacr.org]
"In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x), yields the value C(x) but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually."
In other words, it seems that their technique allows you to encrypt some secret data, then (provably) only release the result of some arbitrary function of that data. It sounds like this means you could (in principle) release health data in encrypted form, then allow researchers to study some ensemble properties of it by giving out the appropriate keys. This aspect of it certainly seems pretty cool.
Nice try (Score:2)
But UCLA doesn't get to claim the credit. MIT was first to present homomorphic encryption: http://web.mit.edu/newsoffice/2013/algorithm-solves-homomorphic-encryption-problem-0610.html [mit.edu]
Roughly speaking, a homomorphism is a map which preserves the properties pertinent to a category. Now think of data as acting on code instead of code acting on data. Since "acting" is a mapping, acting in a homomorphic way would produce program results which are equivalent, but without the decrypting step.
Malware in 3, 2, 1... (Score:3)
Remember, kids, everything that can be applied for good can be applied for ill. And code that is impossible to decipher and analyze is the holy grail of malware.
Encrypting the algorithm, not the code? (Score:3)
Poor RTFA (Score:2)
TFA is shallow on details, but it sounds very much like the "phantom" self-mutating MS-DOS viruses from the late '90s. (The ones which forced anti-virus vendors to effectively implement a debugger/interpreter, to detect malicious code not by how it looks but by the actual work it does.)
Otherwise, the main problem with such intrusive obfuscation is that code is never 100% bug-free, and bugs sometimes trigger undefined/unspecified behavior. It might work before obfuscation, but would break
This is poorly described, but is a breakthrough (Score:2)
Normally obfuscation is bad in cryptography - it means that the system is theoretically broken, but that the way to break it is quite well hidden.
This refers to cryptographically secure obfuscation. This is an entirely new field, and hasn't been possible till now. This paper doesn't prove how to do it, but proves that it is possible for a certain subset of operations.
Basically it boils down to the fact that it is possible to make a computer program that, for a given set of inputs, a) generates a set of outputs b)
Re:Seems improbable (Score:4, Insightful)
Yup. At the end of the day, if this is to be at all useful on the hardware and OSs out there now, it's still going to have to execute, and if it executes, you can run it through a debugger and watch it.
Re: (Score:2)
Not on a protected OS you can't.
I imagine that if this is implemented it will be patented much like Blu-ray, and the only way to get a license is to swallow the DRM.
Re: (Score:2)
Some VMs have "save state" features.
Let's say we have a program that branches many times, as you describe above. The VM is controlled with a script. On the first pass, it records all instruction branches taken, along with the data each branch compared against; then it reloads the state saved at initial program load and begins systematically walking and triggering code.
On each pass, it spits out what the CPU did, with what inputs.
Since it is a given that new path
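A minimal sketch of that walk (Python; the toy program and names are hypothetical stand-ins for the VM-scripting idea): replay from the "snapshot" repeatedly, forcing each recorded branch down its other arm until every path has been seen.

# Enumerate all execution paths of a toy program by replaying it
# (our stand-in for reloading a VM snapshot) and flipping branches.

def toy_program(branch):
    # branch(i) stands in for "which way did conditional jump i go?"
    trace = []
    if branch(0):
        trace.append("checked license")
        if branch(1):
            trace.append("ran main feature")
        else:
            trace.append("nag screen")
    else:
        trace.append("exited early")
    return trace

def explore(prefix=()):
    """Yield (branch decisions, trace) for every path under `prefix`."""
    decisions = list(prefix)
    depth = [0]
    def branch(_):
        i = depth[0]; depth[0] += 1
        if i < len(decisions):
            return decisions[i]   # replaying recorded decisions
        decisions.append(True)    # new territory: default to "taken"
        return True
    trace = toy_program(branch)
    yield decisions, trace
    for i in range(len(prefix), len(decisions)):
        # Re-run from the snapshot, flipping decision i the other way.
        yield from explore(tuple(decisions[:i]) + (False,))

for path, trace in explore():
    print(path, trace)

The catch, as noted elsewhere in this thread, is that the number of paths grows geometrically with branch depth, so "systematically" can still mean "forever" on real programs.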
Re: (Score:2)
If crappy performance is no obstacle, I'm sure you could produce compiled code that is insanely obfuscated. Still, as I said elsewhere, if you can execute it, it can be executed in a debugger and you can watch what it does, and even if it's doing mad long jumps all over its allocated memory, you're going to be able to trace execution. It will betray its functionality.
Re: (Score:2)
As if that has ever been a concern of software makers. "Our software runs too slowly? Well, get a better machine; your ancient one-year-old crate of course cannot run our super-special-awesome piece of artificially deoptimized code."