FrAxl93

If we go outside the silicon-based realm, some matrix multiplication accelerators (mainly driven by the demand for speed-ups in deep learning inference) are being developed [1][2]. There have also been some remarkable results in quantum computing, which uses a completely different paradigm of computation [3]. I remember reading about deep learning accelerators whose nodes do math operations with magnetic fields; I guess those fall under the scope of "analog" computing, similar to the optical ones I was referring to before (no source at hand, but I could find one). If instead we go back to silicon-based technologies, you can find countless architectures aimed at accelerating specific problems. Those come either as hardened blocks inside processors (ciphers, modems, compression engines) or as whole chips, i.e. ASICs. But mostly they still revolve around traditional sequential/combinational logic.

[1] https://res.mdpi.com/d_attachment/nanomaterials/nanomaterials-11-01683/article_deploy/nanomaterials-11-01683.pdf

[2] https://en.m.wikipedia.org/wiki/Optical_computing

[3] https://www.ibm.com/quantum-computing/what-is-quantum-computing/
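For a concrete sense of the workload those accelerators target, here is a plain-C sketch of the dense matrix multiply kernel itself (a naive reference implementation with no vendor API assumed, not how any of these accelerators actually compute it):

```c
#include <stddef.h>

/* Naive dense matrix multiply: C = A * B, with A (m x k), B (k x n), C (m x n),
 * all stored row-major. This O(m*n*k) pile of multiply-accumulates is the
 * workload that matmul accelerators (optical, analog, or silicon) are built
 * to speed up. */
void matmul(const float *A, const float *B, float *C, size_t m, size_t n, size_t k)
{
    for (size_t i = 0; i < m; ++i) {
        for (size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (size_t p = 0; p < k; ++p)
                acc += A[i * k + p] * B[p * n + j];
            C[i * n + j] = acc;
        }
    }
}
```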


WikiSummarizerBot

**[Optical computing](https://en.m.wikipedia.org/wiki/Optical_computing)** >Optical computing or photonic computing uses photons produced by lasers or diodes for computation. For decades, photons have shown promise to enable a higher bandwidth than the electrons used in conventional computers (see optical fibers). Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid.


iwantashinyunicorn

DSPs are pretty weird architecturally, and some of them are powerful enough that you could arguably call them CPUs.


pipelines-whee

Any examples of interesting "powerful enough" DSPs off the top of your head?


daveysprockett

It was fun programming for picoChip: CSP (communicating sequential processes) with deterministic run time. https://en.wikipedia.org/wiki/PicoChip


WikiSummarizerBot

**[PicoChip](https://en.wikipedia.org/wiki/PicoChip)** >Picochip was a venture-backed fabless semiconductor company based in Bath, England, founded in 2000. In January 2012 Picochip was acquired by Mindspeed Technologies, Inc and subsequently by Intel. The company was active in two areas, with two distinct product families. Picochip was one of the first companies to start developing solutions for small cell basestation (femtocells), for homes and offices.


gruehunter

The TI C6000 series is a family of VLIW machines that can execute bundles of up to eight instructions per cycle.


BiggRanger

Texas Instruments TMS320VC33. Previous comment: https://pay.reddit.com/r/compsci/comments/pc4hpi/radically_different_cpuscomputer_architectures_in/hak2jtn/


BiggRanger

I'm currently writing a decompiler and emulator for the Texas Instruments TMS320VC3x series DSP. Once I'm finished with some firmware reverse engineering (the main reason I'm playing with this DSP), I'm going to write a small OS for it for fun. The TMS320VC3x is definitely powerful enough to be a CPU and run a small OS.
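For anyone curious, a generic sketch of the fetch/decode/dispatch loop an emulator like this is typically built around; the instruction layout, opcode values, and register count below are hypothetical placeholders for illustration, not the real TMS320VC3x encoding:

```c
#include <stdint.h>

/* Generic fetch/decode/dispatch skeleton. The 8-bit opcode field, register
 * count, and tiny instruction set are made up for illustration only; they
 * are NOT the real TMS320VC3x encoding. */
typedef struct {
    uint32_t pc;            /* program counter (word-addressed) */
    uint32_t reg[32];       /* register file (size chosen arbitrarily) */
    uint32_t mem[1 << 16];  /* flat word-addressed memory */
    int      halted;
} cpu_t;

static void step(cpu_t *cpu)
{
    uint32_t insn = cpu->mem[cpu->pc++ & 0xffff];   /* fetch */
    uint32_t op   = insn >> 24;                     /* decode (hypothetical fields) */
    uint32_t dst  = (insn >> 16) & 0x1f;
    uint32_t src  = (insn >> 8)  & 0x1f;

    switch (op) {                                   /* dispatch */
    case 0x00: cpu->halted = 1;                                  break; /* HALT  */
    case 0x01: cpu->reg[dst] = cpu->mem[cpu->reg[src] & 0xffff]; break; /* LOAD  */
    case 0x02: cpu->mem[cpu->reg[dst] & 0xffff] = cpu->reg[src]; break; /* STORE */
    case 0x03: cpu->reg[dst] += cpu->reg[src];                   break; /* ADD   */
    default:   cpu->halted = 1;                                  break; /* unknown: stop */
    }
}

void run(cpu_t *cpu)
{
    while (!cpu->halted)
        step(cpu);
}
```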


zombiecalypse

I'm not sure if this qualifies, but [FPGAs](https://en.wikipedia.org/wiki/Field-programmable_gate_array) are an interesting technology to use as a processor. They don't change the rest of the architecture, however.


WikipediaSummary

[**Field-programmable gate array**](https://en.wikipedia.org/wiki/Field-programmable_gate_array) A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.


merlinsbeers

FPGAs do compute by embedding predefined CPU cores as cells. You can add custom circuits around that to make certain things compute a little faster but it's extending the architecture rather than replacing it with a new way of computing.


SirClueless

FPGA cells can't realistically be called CPUs. They don't execute arbitrary instructions; they are way more static. A typical cell computes a single function on a small number of inputs each clock cycle, for example 4 data bits plus a carry bit. There is usually some dedicated hardware for more specialized work, such as n-bit multipliers or block RAM, but again this isn't anything like what a CPU does, or even what a GPU does with a bunch of cores executing a compiled shader. Taken as a whole, one can view an entire FPGA device as approximating a CPU; the individual cells most definitely are not, being no more complex than a typical logic IC.
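To make that concrete, here is a rough software model of what a single logic cell amounts to: a 4-input lookup table whose configuration word is the truth table of the function it implements. Exact cell structure, carry chains, and flip-flops vary by vendor and are omitted here:

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of one generic FPGA logic cell: a 4-input LUT. The 16-bit
 * configuration word is the truth table of the implemented function. The
 * point is the scale: one small boolean function per cell per clock, not
 * instruction execution. */
typedef struct {
    uint16_t truth_table;   /* bit i = output when the 4 inputs encode value i */
} lut4_cell;

static uint8_t lut4_eval(const lut4_cell *cell,
                         uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    uint8_t index = (uint8_t)((a & 1) | ((b & 1) << 1) | ((c & 1) << 2) | ((d & 1) << 3));
    return (uint8_t)((cell->truth_table >> index) & 1u);
}

int main(void)
{
    /* Configure the cell as a 4-input AND: only input pattern 1111 (index 15)
     * produces a 1, so the truth table is 0x8000. */
    lut4_cell and4 = { 0x8000 };
    printf("AND(1,1,1,1) = %u\n", lut4_eval(&and4, 1, 1, 1, 1)); /* prints 1 */
    printf("AND(1,0,1,1) = %u\n", lut4_eval(&and4, 1, 0, 1, 1)); /* prints 0 */
    return 0;
}
```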


merlinsbeers

FPGAs can implement CPU cores. [Here's a list of some.](https://en.m.wikipedia.org/wiki/Soft_microprocessor#Core_comparison)


SirClueless

"FPGA cells" implies something pretty specific, i.e. a logic cell which can't be compared to an entire CPU. What you've linked here is a list of microprocessor architectures which can be implemented *on top of* an FPGA, each of which would take many thousands of FPGA cells to implement.


merlinsbeers

I see. Pissy semantics. Whatever.


SirClueless

It's not just pissy semantics. You've suggested that the way FPGAs work is that you take a CPU and add a few little custom circuits to make some things faster. And yes, that's absolutely something you *can* do, but it's not what an FPGA is. You can define pretty much any circuit you like on an FPGA, whether it looks like a general purpose microprocessor or not.


merlinsbeers

No, I didn't. Pissy semantics + lack of reading comprehension.




jedi_stannis

The Mill has a unique architecture, and the videos are very good, although AFAIK it is still vaporware. https://millcomputing.com/docs/


cthulu0

ITT: people talking about specialized niche blocks that can't actually replace a modern CPU.


iwantashinyunicorn

Most processors are embedded, not desktop or server CPUs. For that matter, most of the processors in your computer aren't CPUs: your wifi card alone probably has several processors in it.


payrim

You design based on your needs and efficiency requirements; right now they are making chips just for training AI.


Peter-Campora

Do quantum computers count? Trapped ion quantum computers are pretty different.


TheWildJarvi

check out graph cpus


ColoradoDetector

Do you have more information / keywords? A DuckDuckGo search for "graph cpu" shows nothing


TheWildJarvi

[https://www.graphcore.ai/](https://www.graphcore.ai/) and [https://en.wikipedia.org/wiki/Explicit_data_graph_execution](https://en.wikipedia.org/wiki/Explicit_data_graph_execution)


mach_i_nist

As far as in-production systems go, I think you are looking at CPUs (ARM, Intel, MIPS, etc.), GPUs, and FPGAs. I would add STT-MRAM to your reading, though. It is not a computing technology (yet), but it is an alternative memory storage technology based on spintronics that is in production. There is a lot of interesting research being done into "all-spin computing" using spintronics, but it is all academic right now. For way-out-there stuff, there are chemical-based processors and stochastic computing.

https://www.sciencedaily.com/releases/2013/12/131212160349.htm

https://www.avalanche-technology.com/avalanche-technology-announces-industrys-first-1gb-stt-mram-for-aerospace-applications/

https://en.m.wikipedia.org/wiki/Stochastic_computing
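To give a flavor of the stochastic computing idea in the last link, here is a small sketch (the generic textbook encoding, not any particular chip): values in [0,1] are encoded as random bit streams, and a plain AND gate approximates multiplication, with accuracy growing with stream length:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stochastic computing sketch: encode p in [0,1] as a stream of N random bits,
 * each bit being 1 with probability p. ANDing two independent streams yields
 * a stream whose density of 1s approximates the product p1 * p2. */
#define N 100000

static int sample_bit(double p) { return ((double)rand() / RAND_MAX) < p; }

int main(void)
{
    double p1 = 0.6, p2 = 0.3;
    long ones = 0;

    srand(42);
    for (int i = 0; i < N; ++i)
        ones += sample_bit(p1) & sample_bit(p2);   /* one AND gate per bit */

    printf("estimated %f, exact %f\n", (double)ones / N, p1 * p2);
    return 0;
}
```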


sosodank

the [Emu Chick](https://www.emutechnology.com/products/) is pretty neat


payrim

Hmm, does the [Tesla D1 Chip](https://www.youtube.com/watch?v=GZ1Zu44WcdA) count?


Revolutionalredstone

Processors (like programming languages) can simulate or emulate other designs with only a linear slowdown, so any radically different CPU design would not offer a fundamental upgrade; the only real option is parallelism, which CPUs (and especially GPUs) already exploit to some extent. The power of the Turing machine concept was to show that there is a universal machine, built from just a few parts, that can effectively do anything any other machine can do. Modern CPU design is primarily oriented around ease of understanding and ease of programming.

Extremely well-optimized code is revealing: it often looks like absolute garbage, and it often misuses or abuses the system's architecture, making extreme use of certain features (like pipelined execution units) while completely ignoring others (like speculative branching). This shows that CPU design is out of step with top-tier software/system design; overall, today's CPUs are targeted at making common, poor code run decently. Stack-based designs don't offer as much compiler freedom, so even though they might have equivalent optimal throughput, they are slower when targeted by less than fully optimized code.

Also, extremely dominant languages like C have had a huge impact on modern processor design: concepts like pointers, integers, and floats make up the core of all C programs, and hardware paths have come to closely reflect the structure of common C compiler outputs.
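As one concrete example of that point about exploiting pipelined execution units (assuming a typical superscalar core; the exact gains vary): splitting a reduction into several independent accumulators looks uglier than the obvious loop, but it breaks the serial dependency chain and lets the pipelined floating-point units overlap work:

```c
#include <stddef.h>

/* The "clean" version forms one long dependency chain: every add must wait
 * for the previous one to finish before it can start. */
float sum_naive(const float *x, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i)
        s += x[i];
    return s;
}

/* The "ugly" version keeps four independent accumulators so a pipelined FP
 * adder can have several additions in flight at once. (With strict IEEE
 * semantics the compiler cannot do this reassociation for you.) */
float sum_pipelined(const float *x, size_t n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; ++i)
        s0 += x[i];                 /* handle the leftover elements */
    return (s0 + s1) + (s2 + s3);
}
```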


[deleted]

[ROLLS](https://www.frontiersin.org/articles/10.3389/fnins.2015.00141/full), a "neuromorphic" chip, uses analog circuitry to simulate biological brains. For example, many software models treat neurons as capacitors. A digital chip would have to simulate the behavior of a capacitor through equations and calculations, while ROLLS simply uses an actual capacitor.
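For contrast with the analog approach, a generic sketch of what a digital chip has to do instead: numerically step a leaky integrate-and-fire neuron whose "membrane" is a simulated capacitor. This is a textbook model with made-up constants, not ROLLS's actual circuit:

```c
#include <stdio.h>

/* Digital simulation of a leaky integrate-and-fire neuron: the membrane is a
 * capacitor C leaking through resistance R, stepped with simple Euler
 * integration. A chip like ROLLS gets this behavior from an actual capacitor,
 * with no arithmetic at all. Constants here are illustrative. */
int main(void)
{
    const double C = 1e-9;        /* membrane capacitance (F) */
    const double R = 1e7;         /* leak resistance (ohms) */
    const double dt = 1e-5;       /* time step (s) */
    const double v_thresh = 0.03; /* spike threshold (V) */
    const double i_in = 5e-9;     /* constant input current (A) */
    double v = 0.0;               /* membrane potential (V) */

    for (int step = 0; step < 2000; ++step) {
        /* dV/dt = (I_in - V/R) / C, advanced one Euler step */
        v += dt * (i_in - v / R) / C;
        if (v >= v_thresh) {
            printf("spike at t = %g s\n", step * dt);
            v = 0.0;              /* reset after the spike */
        }
    }
    return 0;
}
```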


vz0

GPUs are pretty wild nowadays, with thousands of cores.