Programming Code

it may or may not require refactoring code, a recompile, something else, or nothing at all.

one would think it depends entirely upon the implementation.

no one outside of an NDA can claim to know.

but, we can speculate, and infer.

AMD has openly stated its goal to get a platform 25x more efficient than the current offerings within the next 5 years.

so, they might take their compute platform and implement it using PET/POET CMOS, instead of traditional silicon CMOS, and it could work 1:1, albeit with the advantages of Taylor's genius; they might do similar with their GPU.

sometimes an emulation layer is used.

for example, when AAPL switched from PowerPC to Intel, it was mostly a recompile of the operating system, but for legacy applications a feature called Rosetta was available, so that PowerPC application code would still run on x86.
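
to make the 'recompile vs. emulate' distinction concrete, here's a toy sketch in C (entirely my own invention -- the two-opcode 'guest' ISA below is made up and has nothing to do with 68k, PowerPC, or how Rosetta actually works): a recompile is just the same source built for the new target, while an emulation layer fetches the old binary's instructions one at a time and dispatches them to native handlers.

    /* toy-emu.c -- a deliberately tiny "emulation layer" sketch.
     * the two-opcode guest ISA is invented for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_ADDI = 0x01,    /* add the next byte to register r0 */
           OP_HALT = 0xFF };  /* stop the guest program           */

    /* the "legacy binary": a few guest instructions as raw bytes  */
    static const uint8_t guest_code[] = {
        OP_ADDI, 5,           /* r0 += 5 */
        OP_ADDI, 7,           /* r0 += 7 */
        OP_HALT
    };

    int main(void) {
        uint32_t r0 = 0;      /* emulated guest register          */
        size_t   pc = 0;      /* emulated program counter         */

        /* classic fetch/decode/dispatch loop: each legacy
         * instruction is carried out by native host code          */
        for (;;) {
            uint8_t op = guest_code[pc++];
            if (op == OP_ADDI)      r0 += guest_code[pc++];
            else if (op == OP_HALT) break;
            else { fprintf(stderr, "bad opcode 0x%02X\n", op); return 1; }
        }

        printf("guest r0 = %u\n", r0);   /* prints: guest r0 = 12 */
        return 0;
    }

real translation layers typically cache the translated blocks instead of re-decoding them on every pass; that caching is where most of the performance work goes.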

and of course some re-coding can be done to optimize for newly-available features.

for example (AAPL again), their 'Snow Leopard' release of OS X was more or less just a massive optimization effort to make everything in the operating system 64-bit native and tuned for performance.
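
at the source level, most of what '64-bit native' buys is wider pointers, wider size types, and more registers for free on a recompile; here's a trivial sketch of my own (nothing Apple-specific about it) -- the same C file, built 32-bit vs. 64-bit, simply reports different word widths and a much larger addressable range:

    /* word-size.c -- the same source, just recompiled per target.
     * illustration only; not Apple code.                           */
    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void) {
        /* these widths come from the target, not from the source:
         * a 32-bit build prints 32s, an LP64 build prints 64s      */
        printf("pointer width: %zu bits\n", sizeof(void *) * CHAR_BIT);
        printf("size_t width:  %zu bits\n", sizeof(size_t) * CHAR_BIT);
        printf("long width:    %zu bits\n", sizeof(long)   * CHAR_BIT);

        /* the practical payoff: address space and object sizes
         * that no longer top out around 4 GB                       */
        printf("SIZE_MAX: %llu\n", (unsigned long long)SIZE_MAX);
        return 0;
    }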

with datacenters and the enterprise adopting platform virtualization, will the difference even matter to a generic application stack, as long as hypervisor abstraction (vmware, hyper-v, kvm, etc.) is supported well?
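
on the hypervisor point: from inside the guest, the platform underneath is nearly invisible to the application stack; about the only standard x86 tell is the 'hypervisor present' bit that CPUID exposes. a quick sketch (x86 with gcc/clang only, purely illustrative):

    /* hv-check.c -- does the guest think it's virtualized?
     * x86-specific, gcc/clang only; illustrates how little the
     * application stack actually sees of the platform underneath. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* CPUID leaf 1: ECX bit 31 signals "a hypervisor is
         * present" -- vmware, hyper-v, kvm, xen all set it         */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 31)))
            printf("running under a hypervisor\n");
        else
            printf("bare metal (or CPUID unavailable)\n");

        return 0;
    }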

outside of the ultra-high-end (or special-purpose IBM/POWER, etc.) it's clear that x86 has been dominant; recently, however, we are seeing the reverse (ARMv7 hardware running x86 code).

where things move from generic to specific, there is likely to be enough leverage to warrant running on bare hardware.

one would expect that the effort would be made to take advantage of the hardware where necessary, e.g., SIMD (vector) or MIMD (multiprocessor, multicore) processing, depending on the class of problem; it may not be necessary to have all of these at once. where it is, that winds up being applied as things like GPU compute offload or high-end database clusters.

at least initially.
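
to put the SIMD/MIMD split in concrete terms, here's a rough sketch of my own (SSE intrinsics plus an OpenMP pragma; not anyone's production code): the same vector add done three ways -- plain scalar, SIMD (four floats per instruction on one core), and MIMD (the loop split across cores).

    /* vec-add.c -- one operation, three parallelism styles.
     * toy illustration only. build (gcc/clang, x86):
     *   cc -O2 -fopenmp -msse vec-add.c                            */
    #include <stdio.h>
    #include <stddef.h>
    #ifdef __SSE__
    #include <xmmintrin.h>   /* SSE intrinsics */
    #endif

    #define N 1024

    /* plain scalar loop: one element per step                      */
    static void add_scalar(const float *a, const float *b, float *c, size_t n) {
        for (size_t i = 0; i < n; i++) c[i] = a[i] + b[i];
    }

    /* SIMD: one core, but four floats per instruction (SSE)        */
    static void add_simd(const float *a, const float *b, float *c, size_t n) {
    #ifdef __SSE__
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
        }
        for (; i < n; i++) c[i] = a[i] + b[i];   /* leftover tail    */
    #else
        add_scalar(a, b, c, n);                  /* portable fallback*/
    #endif
    }

    /* MIMD: many cores, each running its own slice of the loop     */
    static void add_mimd(const float *a, const float *b, float *c, size_t n) {
        #pragma omp parallel for
        for (long i = 0; i < (long)n; i++) c[i] = a[i] + b[i];
    }

    int main(void) {
        static float a[N], b[N], c[N];
        for (size_t i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

        add_scalar(a, b, c, N);
        add_simd(a, b, c, N);
        add_mimd(a, b, c, N);

        printf("c[0]=%g  c[N-1]=%g\n", c[0], c[N - 1]);   /* 1 and 1024 */
        return 0;
    }

GPU compute offload is the same idea pushed further -- very wide SIMD repeated across many cores -- which is why it only pays off for certain classes of problem.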

there's a story -- mostly attributed to Toyota -- where in the 80's, their engineers were designing the first cupholders, so they went to a local 7-11 and got every size of cup-like thing they could find, to make sure their design would work on cups in current use.

these days, that design problem has been inverted: cup designers have to make sure their cups work in the holders, not the other way around.

and perhaps not always for the reasons you might first consider.

the technology for A was first driven by B; now the technology for A drives the technology for B.

so, I suppose the answer is (like everything else) "it depends".

GLTA,

R.
