
In a recent episode of the very popular EEVblog, electrical engineer Dave Jones walks through the fundamentals of the Field Programmable Gate Array (FPGA). He explains how FPGAs work, how they differ from microcontrollers, and their advantages and disadvantages. I’m still learning this stuff myself, so rather than stumbling through my own explanation, I’ll leave it to Dave to give you the lowdown on the venerable FPGA.

Matt Richardson

Matt Richardson is a Brooklyn-based creative technologist, Contributing Editor at MAKE, and Resident Research Fellow at New York University’s Interactive Telecommunications Program (ITP). He’s the co-author of Getting Started with Raspberry Pi and the author of Getting Started with BeagleBone.


Comments

  1. Izzabella Sayer says:

    Neat idea. Could you use it to help with medical applications in any way?

    1. tim dolan says:

      I believe that FPGAs are used in many commercial products, from computers and phones to medical equipment.

  2. Andy in Tucson says:

    If you stop trying to compare FPGAs with microcontrollers, you’ll be well on your way to understanding them.

    Remember: FPGA design _is_ digital logic design. That is the whole truth; the rest is commentary. (Now go study it!)

    The design concept is no different from when we put a pile of 74LSxxx-series logic devices on a board. You need to understand synchronous logic (prop delay, setup/hold, other timing concerns). Of course, the synthesis tool helps with logic minimization so you can stop worrying about K-maps.
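    To make the K-map point concrete, here is a minimal Python sketch (my illustration, not from the comment) of the equivalence check that a synthesis tool’s logic minimizer effectively performs: the full sum-of-products form of a 3-input majority function compared exhaustively against its K-map-minimized form.

```python
from itertools import product

# Original sum-of-products: f = A'BC + AB'C + ABC' + ABC  (3-input majority)
def f_full(a, b, c):
    return (((not a) and b and c) or (a and (not b) and c)
            or (a and b and (not c)) or (a and b and c))

# K-map minimized form of the same function: f = AB + BC + AC
def f_min(a, b, c):
    return (a and b) or (b and c) or (a and c)

# Exhaustively compare both forms over all 8 input combinations,
# the way a synthesis tool must preserve behavior while minimizing.
for a, b, c in product([False, True], repeat=3):
    assert f_full(a, b, c) == f_min(a, b, c)
print("minimized form is equivalent to the full SOP")
```

    The minimized form uses three 2-input AND gates instead of four 3-input terms; the synthesizer finds reductions like this automatically.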

    Or, think of it this way: you can implement a processor in an FPGA, but you can’t implement an FPGA in a processor.

    1. miroslava von schlockbaum says:

      Don’t FPGA simulators run on microprocessors?

      1. digineer says:

        yes. slowly.

      2. Andy in Tucson says:

        FPGA simulation tools indeed run on microprocessors — I use ISim, Active-HDL and ModelSim. But you use a simulation tool to verify your logic design.

        And you should not discount simulation! It’s VERY important. It’s a lot easier to find problems in simulation than in the actual hardware, usually because you cannot possibly connect enough logic-analyzer or scope probes to the system.

  3. digineer says:

    You tell a CPU what to do. You tell an FPGA what to be.

    1. Matt Richardson says:

      That’s the best way I’ve ever heard to sum it up! Thanks, I hope you don’t mind if I steal it!

      1. digineer says:

        Agreed. It’s the most concise summation I’ve seen as well. Unfortunately I can’t claim credit for it. Not sure where on the internet I first saw it.
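  The aphorism reads well in code, too. Here is a small Python sketch (my illustration, not from the thread) contrasting the two mindsets: the same 2-bit addition written as a sequence of instructions a CPU executes, and as a description of a fixed network of gates, which is closer to what an FPGA *is* after configuration.

```python
# CPU style: tell it what to DO -- a sequence of instructions.
def add_cpu(a, b):
    result = a + b          # one ALU operation at a time
    return result & 0b11    # keep the low 2 bits

# FPGA style: tell it what to BE -- a fixed wiring of gates that all
# exist at once, modeled here as boolean operations on individual bits.
def add_fpga(a1, a0, b1, b0):
    s0 = a0 ^ b0                # half-adder sum bit
    c0 = a0 & b0                # half-adder carry into the next stage
    s1 = a1 ^ b1 ^ c0           # full-adder sum bit (carry-out dropped)
    return (s1 << 1) | s0       # reassemble the 2-bit result

# Both descriptions compute the same function over all inputs.
for a in range(4):
    for b in range(4):
        assert add_cpu(a, b) == add_fpga((a >> 1) & 1, a & 1,
                                         (b >> 1) & 1, b & 1)
```

  The gate version looks verbose in software, but on an FPGA those gates are physical resources operating in parallel every clock cycle rather than instructions taking turns on one ALU.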

  4. Marco OLIVO says:

    Let’s think about TWO scenarios. In the first, a dedicated SoC (e.g. a microcontroller) embeds an eFPGA as a flexible IP block, offered to final customers for application-specific customization of the SoC. In the second, a standalone FPGA provides more and more hard-wired IP of common usage in its library (on top of SRAM and a uC: e.g. SPI, LIN, PWM, MAC, etc.). My question is: which of the two scenarios will be first to offer a competitive solution for relatively low-volume segments (1-10 Mu/year)?

    1. Andy in Tucson says:

      Marco,
      If you step back a moment and realize that the FPGA (and the SoC concept, which basically marries a microcontroller and an FPGA in the same package) is by definition “application specific,” you’ll see that your question doesn’t make much sense.

      While one might pull from a library of “IP cores” (and I hate that term) to use in an FPGA (or SoC) design, the compelling reason to use an FPGA is because you can design in exactly what you need and not have extra “stuff” left over that you don’t.

      Most FPGA (and SoC) designs are of the special-sauce variety — the product has specific needs for which a hardware solution is required. You do the design based on those needs and the product gets built and shipped. It is unreasonable to expect that a product’s requirements will change significantly once in production to warrant the sort of “flexibility” you’re considering.

      Put another way, suppose a product design uses a USB interface to the host. You design that in. In addition to the FPGA/processor, you have a connector, passives, and probably power switching. Now the customer wants Ethernet. Do you initially design in a processor that can support both interfaces and provide the board space for both connectors? Or worse, do you populate the unnecessary Ethernet connector and magnetics (at additional BOM cost) in anticipation of a customer need that might never come?