MATLAB Can't Manipulate 64-Bit Integers
An anonymous reader writes "MATLAB, an important package of mathematical software heavily used in industry and academia, has had support for 64-bit machines for several years now. However, the MATLAB developers still haven't gotten around to implementing even basic arithmetic operations for 64-bit integers. Attempting to add, divide, subtract, or multiply two 64-bit integers will result in an error message saying that the corresponding method does not exist. As one commentator put it, 'What is the point of having numerical data types that can't be manipulated?'" The post notes that the free MATLAB clone GNU Octave deals with 64-bit integers just fine.
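For illustration, here is roughly what the failure looks like next to Octave (a sketch from memory of the affected releases; the exact error text may differ):

    % MATLAB (affected releases): basic arithmetic on int64 values errors out
    a = int64(5);
    b = int64(3);
    c = a + b;    % error: no 'plus' method defined for input arguments of type 'int64'

    % GNU Octave: the same operations just work
    a = int64(5);
    b = int64(3);
    a + b         % ans = 8
    a * b         % ans = 15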
So... (Score:5, Insightful)
How is this news?
Answer (Score:3, Insightful)
What is the point of having numerical data types that can't be manipulated?
So they can charge more for the upgrade?
Re:It's not that big of deal (Score:4, Insightful)
Re:It's not that big of deal (Score:3, Insightful)
Yes it is. People who do the kind of hardcore math that MATLAB is good at are the ones who actually need 64 bit computing.
Surprisingly, not all that often. People who work with very sensitive systems (chaotic ones in particular) or VERY precise data need 64-bit precision, but for 98% of everyone else, it's just not necessary. Anyone doing really advanced work is going to use a supercomputer anyway, for obvious reasons.
MATLAB's largest audience is engineers; applied mathematicians and physicists use it often too, but the engineers greatly outnumber them. Given that engineers work with real data, which never has more than 6 digits of accuracy anyway (3 is more realistic), the push for higher precision just isn't there.
It's one reason MATLAB doesn't have the greatest 64-bit support: there's no real demand for it yet. The few who need it can just as easily work in C++, since at bottom MATLAB is just a collection of routines with a nice interface, much easier plotting/graphics commands, and a good set of help files.
For reference, I've had 64 bit computing readily available to me for...my entire career, and I've never once NEEDED it, despite being an applied mathematician.
Re:Python's SciPy and NumPy FTW (Score:3, Insightful)
SciPy/NumPy, R, and Octave are all perfectly good alternatives to MATLAB these days for most work. But there are a lot of people who rely on MATLAB-specific toolboxes. I look forward to the day when proprietary math and stats packages take their place in the bitbucket of computing history, but we're not quite there yet.
A Heavy User's Opinion (Score:5, Insightful)
Re:It's not that big of deal (Score:5, Insightful)
Re:It's not that big of deal (Score:3, Insightful)
Numerical computations that are highly optimized for speed do not always allow for variable-sized numbers. The more you assume about a problem, the faster you can make the algorithm that solves it. I'm betting that many of MATLAB's optimized numerical routines rely on knowledge of the underlying data structure itself. It's a trade-off: speed vs. scalability/generality.
Re:It's not that big of deal (Score:3, Insightful)
This is ridiculous; the first thing any self-respecting physicist will do is change the units of the problem.
Not that big a deal (Score:4, Insightful)
Re:It's not that big of deal (Score:3, Insightful)
1. MATLAB doesn't support 64 bit ints
2. People still use MATLAB.
Thus, there is a group of people "who do the kind of hardcore math that MATLAB is good at" who don't need 64 bit computing.
Re:It's not that big of deal (Score:3, Insightful)
Frankly, just as worrisome is that MATLAB doesn't appear to take advantage of GPGPU yet. The concept has been around for over half a decade, and I'd have expected the MAtrix LABoratory to jump on that bandwagon quicker than most. It's a game changer in their core competency, after all.
I haven't looked at MATLAB+GPGPU recently, but back in the olden days before CUDA and OpenCL there were a handful of 3rd party matlab extensions that made use of GPGPU. Nothing official, but still plenty functional in their limited areas of implementation. The company's laziness with respect to GPGPU is no surprise (see my other rant in this story's discussion) and the fact that others have put together limited GPU-based extensions has probably further reduced the pressure for them to do anything in that area.
Re:Especially since someone has implemented it.... (Score:5, Insightful)
Re:MATLAB ~= fast (Score:3, Insightful)
Only if it can be vectorized. Otherwise, MATLAB is *bog slow*. Compiling as a MEX function will restore the speed.
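A rough sketch of the gap the parent is describing (my own illustration; absolute timings vary wildly by machine and release):

    n = 1e6;
    x = rand(n, 1);

    % Interpreted loop: every iteration goes back through the interpreter
    tic
    y1 = zeros(n, 1);
    for k = 1:n
        y1(k) = sin(x(k))^2;
    end
    toc

    % Vectorized: one pass through compiled library code
    tic
    y2 = sin(x).^2;
    toc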
Re:The R program can't do 64 either (Score:1, Insightful)
And chip designers. And number theory specialists (not all of them, oh the shock, are cryptographers). In fact, anyone that uses a number as an abstraction and not a physically interpreted quantity.
Re:A Heavy User's Opinion (Score:2, Insightful)
Re:Yes but Octave (Score:5, Insightful)
For a program like Octave, having no GUI is very forgivable. There is really no way to work with the system other than prompt commands. Even MATLAB is very prompt based.
What is unforgivable in Octave's case is its graphing capability. Octave uses gnuplot for drawing, which basically means it is stuck in the 1990s when it comes to making plots. 3D plots are slow, difficult and complicated to create. Animations are out of the question: 99% of the time you're better off exporting each frame to PNG (itself a nightmare) and animating from those. 3D data is all but ungraphable on Linux systems anyway, so I suppose Octave is not alone here.
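For what it's worth, the frame-export workaround usually looks something like this (a sketch; behaviour depends on the installed Octave/gnuplot versions):

    % Dump one PNG per frame, then stitch them together with an external tool
    x = linspace(0, 2*pi, 200);
    for k = 1:50
        plot(x, sin(x + 2*pi*k/50));
        print('-dpng', sprintf('frame%03d.png', k));
    end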
Re:It's not that big of deal (Score:2, Insightful)
Re:It's not that big of deal (Score:5, Insightful)
And as stated before, this matters little, since it is a performance issue, and MATLAB still offers the best performance in its class, even against the packages that do have this feature.
Absurd limit theory (Score:2, Insightful)
It's best to design software for limits that are frankly absurd. Since I coined the phrase "absurd limit theory", let's delve a little bit into it.
Split a second into 2^64 parts and each increment is a smaller unit of time than matters to our current understanding. 2^64 seconds, on the other hand, is far more time than has elapsed since the beginning of our Universe. An index of 2^64 is big enough to give a unique ID to every grain of sand on Earth, though counting every photon or known subatomic particle would take far more bits. And in each dimension, angles discriminated in units of 1/2^64 of a circle are fine enough for any imaginable purpose.
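Back-of-the-envelope on the time claim (rounded figures, easily checked at the prompt):

    % 2^64 seconds expressed in years, vs. the roughly 1.38e10-year age of the Universe
    2^64                      % ans = 1.8447e+19 seconds
    2^64 / (365.25*24*3600)   % ans is about 5.85e+11 years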
Given that we are close to understanding these elements, 128 bits would probably be a more durable unit. 128 bits should give us a few more years of discrimination before the physical sciences discover even finer units of time, mass and distance. Some even theorize now that 64 bits are the limit: that space, mass and time have a granularity and that finer resolutions are absurd. To that I say...
If software must be designed with limits, it should use limits so absurd that no user, however hypothetical, could ever be constrained by them, because the designer cannot know how long the application will persist or how its use will evolve.
It's not the most efficient design that wins. Ultimately it's the most persistent.
Re:Python's SciPy and NumPy FTW (Score:2, Insightful)
In my opinion, MATLAB is too awkward a language for anything besides the math itself. For example, it was difficult to use when I needed to do string manipulation to figure out which files to load. Python, by contrast, is a full-fledged language that lets me handle all the supporting tasks around my math-related work.
Re:Commercial software lags... (Score:1, Insightful)
For some reason, commercial software usually seems to lag worst on the 64-bit transition. Windows and OS X lagged Linux; Java and Flash were the last bits on my Linux systems to go 64-bit; etc. They act as if 64-bit is a fad and people will soon come to their senses and revert to 32.
*Solaris and DEC users shake their heads*
Re:Python's SciPy and NumPy FTW (Score:2, Insightful)
Just because they use the same BLAS/ATLAS backend doesn't mean they'll perform at the same speed. The BLAS calls aren't necessarily made with the same efficiency, and common operations that chain several calls together aren't necessarily implemented the same way. A lot of well-tuned functions come prewritten with MATLAB. It's like saying that if I put a Ferrari engine in a Chevy Suburban, it will perform as well as it does in a Ferrari.
Re:So... (Score:3, Insightful)
Re:It's not that big of deal (Score:2, Insightful)
"If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype"
It is true that the GP mentions small numbers, and floating point is indeed more precise near zero, but as the numbers get larger, the gap between two adjacent representable floating-point values grows as well.
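You can see that spacing directly with eps(), which reports the gap to the next representable double above its argument:

    eps(1)      % ans = 2.2204e-16
    eps(1e6)    % ans = 1.1642e-10
    eps(1e15)   % ans = 0.1250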
Re:It's not that big of deal (Score:3, Insightful)
Re:It's not that big of deal (Score:4, Insightful)
Hmm, a double has 64 bits, where do the last 12 go?
A 64-bit integer can lose precision when converted to a 64-bit double because the double has to spend some of its bits on the sign and exponent, leaving only the mantissa for the digits. In IEEE 754, 1 bit goes to the sign and 11 to the exponent, so the significand has 52 explicit bits (53 counting the implied leading bit). Your 64-bit integer now has to be expressed as a 53-bit significand times a power of two, so near the top of the int64 range roughly 2^11 consecutive integers all round to the same double.
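A minimal check at the prompt (MATLAB or Octave) shows where the exactness runs out:

    2^52 + 1 == 2^52      % ans = 0 (false): 2^52+1 is still exactly representable
    2^53 + 1 == 2^53      % ans = 1 (true): the +1 is silently lost
    eps(2^63)             % ans = 2048 = 2^11 integers per representable double up there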
Re:Yes but Octave (Score:3, Insightful)
You should check out some of the newer versions of Octave. The graphics in versions 3.0 and 3.2 are very much improved. (Disclosure: I was an active Octave developer a couple of years ago.)
Re:Yes but Octave (Score:3, Insightful)
Re:Especially since someone has implemented it.... (Score:5, Insightful)
Using 64-bit integers instead of floats is a common trick in embedded C for control and signal processing on low-power processors. I have experience of four different embedded systems used in commercial products from three different companies I've worked with; three of the four used 64-bit integers for roundoff-sensitive calculations.
I was a bit surprised that Matlab can't handle this, but then I've seen the poor quality of the ostensibly production-ready code that comes out of their M2C converter - it was about ten times the code footprint and a fifth the speed of a minimally-optimised C version of the same algorithm.
Honestly, I don't know how anyone can justify paying for this, when R (and even Octave in this instance) is more capable. Where the target platform requires C or asm code, then doing development in Matlab is usually more trouble than it saves. The graphs are prettier, though.
Re:Especially since someone has implemented it.... (Score:4, Insightful)
Re:Exactly, 64 bits is so over rated (Score:4, Insightful)
CPU operations are limited to a certain number of bits. Programming languages like C/C++ perform their basic arithmetic at the machine level, so they inherit the same limitations. These bounds are not hard limits, though: they can be exceeded through library/template facilities at the C/C++ level, or with the built-in arbitrary-precision arithmetic of high-level (particularly object-oriented) languages such as Pike and Python.
I can tell you libgmp is not stuck with BCD. But the BCD aspect persists because some uses of extended precision are financial/money based, and conversion to and from an external decimal format is frequent enough that it's easier/faster to do the arithmetic directly in decimal, even if it's squeezed into 4 bits per digit. This has been going on since early computers; the ancient IBM model 1620 [wikipedia.org] could do it in hardware. As you can see from the code in the links I posted earlier, a choice of language can hide the fact that the underlying architecture has fixed-width arithmetic.
BTW, for fun, compare the speeds of those two programs, which are implementations of the same algorithm.
Re:Especially since someone has implemented it.... (Score:2, Insightful)
Oddly enough, he's right. Integer operations *are* way faster on most CPUs. Maybe not on a modern x86 working in SSE registers, but x86 isn't "most CPUs".
If you're writing code for a fixed-point DSP chip (used in most home receivers), for the majority of RISC processors (ARM, MIPS, most Power), or for any microcontroller, working in integer space will be much faster, since floating point either has to be emulated or has to be dispatched to a separate floating-point unit, with extra overhead to move data to and from it.
And the same engineers who target code to those systems also tend to use Matlab. Many systems (control systems, audio/video codecs, whatnot) are simulated in Matlab before they're even written for their target processor, where you can do things like verify the algorithm up front, then specify an arbitrary number of bits at different stages to find out how much error it introduces so you can pick your data types in your target.
Matlab with 32-bit integers (or 64 bit floats + rounding, for 53 bits) is good enough for simulating most 16 bit DSPs (40 bit accumulator, usually) but not quite enough for a 24 bit DSP (56 bit accumulator, usually). Having pure 64 bit integer support means you're good for 24 bitters.
Now, knowing Matlab, I wouldn't be surprised if they emulate integers somehow using floating point ;)
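To make the DSP-simulation point above concrete, here's my own rough sketch (nothing official; the names and scaling are made up) of the kind of fixed-point accumulator simulation that native 64-bit integer support makes easy. It runs in Octave, or in any MATLAB release where int64 arithmetic actually works:

    % Simulate a Q15 multiply-accumulate with a wide integer accumulator.
    % Every intermediate product is held exactly, so the simulation tracks
    % what a wide-accumulator fixed-point DSP would compute.
    x = int64(round(32767 * (2*rand(1, 256) - 1)));   % fake Q15 samples
    h = int64(round(32767 * (2*rand(1, 256) - 1)));   % fake Q15 coefficients
    acc = int64(0);
    for k = 1:numel(x)
        acc = acc + x(k) * h(k);    % products and sums fit comfortably in 64 bits
    end
    y = double(acc) / 2^30          % convert from Q30 back to a real value for inspection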
Re:Exactly, 64 bits is so over rated (Score:1, Insightful)
But at the same time, why don't we have one math library that supports it all? Decimal rounding for financial institutions, fast native math until it overflows 32 bits, doubles where precision isn't critical, and arbitrary precision when you ask for it.