The fact that one person can manage this (which would back in the day have taken god knows how many man years of dev time) gives me hope in terms of humanity's capability to somehow deal with the ever increasing technological environment we're enmeshing ourselves into. Thumbs up for a mega cool project!
One talented person. I don't think the typical CS grad would be anywhere near this level. This is pretty much a "10x".
But there are a few others too; here is someone who made an entire SoC (including CPU, GPU, and audio --- all of his own design) on an FPGA, plus wrote the software to run on it. And this is apparently his first FPGA project:
10x indeed. This is an example of that even rarer creature, the "unstoppable" engineer. Need an FPGA? I'll just learn verilog. Never made a pcb but need one? No prob, just learn kicad and figure out how to solder SMD. No docs? Just reverse engineer. No tools? Make my own.
This approaches Woz level engineering, if not 100% technically, at least philosophically.
I believe we're seeing the lever that the Internet provides produce results, in cases like this. Smart kids before the Internet had to dig up information in widely disparate sources; some of these skills and knowledge were the exclusive purview of universities and research institutions. You simply couldn't do some of this stuff if you didn't have access to a lab and someone to show you how to do it. The Internet has caused an increase in access to knowledge comparable to, and maybe even surpassing, the leap the printing press gave us over the pre-Gutenberg world.
In other words, the Internet is a multiplier that applies to the incredible curiosity and enthusiasm and intelligence of bright kids. So, multiply your 10x (or whatever #x) nerd by the 10x (or probably much greater multiplier) of the Internet, and you get the world we're living in now. Amazingly, the multiplier will likely continue to increase over time. I don't know if it will accelerate in a way comparable to the Internet coming along (I mean, we had centuries between early really big knowledge multiplier leaps), but it might.
We live in amazing times is what I'm trying to say, and I'm genuinely jealous of kids coming up today, despite some minor caveats. I was bright and curious, and lucky enough to have access to a computer (C64 first, later C128 and Amiga 2000), but the access to knowledge was still quite limited. Internet didn't come for me until I was an adult. Internet and a boundless curiosity and practically unlimited free time is heady stuff.
The internet is also full of distractions. There's a risk that the stimulation-rich environment will reduce the level of boredom that can instigate creativity, as I know it did for me - I had an out of date Commodore 64 (until I fried one of its chips) followed by an out of date Amstrad CPC464, with no software so I had to write my own.
Bingo. I'd summer at a farm where the only fun electronic item was a TRS-80. Apart from running outside or feeding the animals, that was my main entertainment. I spent hours writing adventure games for my cousins to play. Same, later on, in school, with TI calculators.
But my kids? Despite being "digital natives", they don't even really know how to use computers. (Yes, my failure as a parent, but also indicative of the environment.)
I think there's another thing "different" now vs the 80s/90s: the decline of good computer magazines and, despite places like hn, the relative scarcity of good technical resources to replace them. No Byte, no Dr. Dobbs, and while there are still a couple of magazines dedicated to Linux, I find their quality a bit uneven compared to the old Amiga magazines.
Personally I grew up reading Dr. Dobbs, Byte Magazine, a slew of Amiga journals and Scientific American. I probably started reading the latter at a point where I understood maybe 40% - but it was a great way to learn English.
On the plus side, you can now go and play with and read the code from the likes of Bernstein, Percival, Torvalds and things like the entire Solaris system. And there are great resources, like the Arch Linux wiki, the FreeBSD documentation and dev/user lists. But the good stuff is increasingly hard to discover in an ocean of mediocre stuff. And I have yet to find any good canonical resource for "the good stuff". One of my biggest disappointments with college was that academics appear to be doing a terrible job of keeping up to date, or helping students seek out good extracurricular resources, despite the fact that these resources keep growing and growing.
I hope that the trend toward open publishing of research will continue, and maybe we'll see some sister resources in the form of free popular science resources pop up -- I could easily imagine universities and research institutions pooling resources in order to help researchers edit and publish popular science articles in addition to their standard papers -- as a combination of general science education and a way to raise awareness of/advertise research.
I was thinking more along the lines of a meta-resource, ie: a site that publishes articles, reviews and notes on new books and projects. LWN.net is one such resource.
I do think it's important to point out that there are a lot of open books out there, facilitated by the web, like the haskell-book. But there have always been good books; they're not enough without good librarians.
My first set of magazines were the Input series, a British monthly magazine about coding for 8-bit systems.
I invested the majority of my allowance in such magazines, "The C User's Journal" (later C/C++ User's Journal), Dr. Dobbs, PC Techniques, GDC Mag and lots of books.
Luckily my university, being highly focused on systems programming, had a huge collection of books and SIGPLAN papers, which opened my eyes to the world of safe systems programming. Thanks to it I was able to delve into the work from Xerox PARC and ETHZ, as well as the research that was happening with ML (Caml Light was rather new back then) and Prolog.
I had exactly the same start, with the Input series. It's thanks to that magazine that I even learned about assembly language programming, which, although extremely low level and in many ways arcane, opened the doors to understanding so many more things about a computer system.
I'm really worried about this. Not only as far as it impacts computer skills but everything.
I watched a stand-up piece/talk by Stephen Fry in which he described how he read the complete works of Oscar Wilde when he was 14. Some books multiple times. I'm afraid that the odds that a 14-year-old will do that have gone down dramatically because of the informational and stimulation overload we are under today. "Smart devices" in particular I believe to be more of a curse than a blessing. I've been trying to limit my own use and it's been great. Of course it doesn't always work, like right now.
How true! I couldn't afford a computer in 1977, so I bought a book in 1978 on how to program the DEC PDP-11 in assembly language. I understood maybe one thing for every ten I couldn't grasp without a real computer. In 1978 I bought a Commodore PET with a cassette tape drive built into it. Learning was slow going. Even in 1995, I was still calling companies for information, unless they had a website (< 25,000 then). I've had the chance to do some cool stuff, because I tried hard to make those connections, whether it was writing a letter to a company for a sample kit of their electronics products or something else. Now, my son has made rockets with sugar fuel, built a robot controllable via the web, made magnet ski boots for walking upside-down on steel, etc., all from YouTube and other sources. I think he is smarter than I ever was, but I also think the information availability (1 min. vs. 2 weeks for an answer by letter or phone), and the number of people publishing videos, how-tos and such, have greatly amplified the process. It is only a matter of time before we walk up the exponential curve of progress we are standing before today on a slight incline.
It's the acceleration that leads up to the singularity. Our current technological level allows anything that requires knowledge and materials just to happen at faster and faster rates.
This is soo true. When I was starting out, I had to save money to buy giant manuals, and then do drive bys of different bookstores to find one that happened to have the thing.. it was a nightmare.. like.. have an idea.. do a month of sourcing, and then maybe have the knowledge to pursue it.. now I research 10 topics in a day.. and have the luxury of choice! It's stunning.
I agree. I just got a great book, Electronic Principles by Malvino, on electronic engineering with an especially analog focus. I was told to get another one to learn proper PCB design. There are others for Verilog. Unlike programming, most stuff in these has to be learned by trial-and-error, with hardly any instant solutions a la StackOverflow. Graphics cards themselves are their own beasts with complex issues even for simple ones, esp. integration w/ regular software.
So, as I'm reading this, I'm surprised how nonchalantly he describes what looks like a ton of headaches that's in front of me personally. Doing some basic stuff in Verilog on an FPGA is straightforward. What he's done was not, unless he found some excellent resources that basically let him cheat a lot in the learning. I'd be interested in them, as I'm collecting resources with HW knowledge and wisdom.
For beginners, buying books on PCB design isn't worth the money. You'd be better off spending $50-100 on a load of sensors, designing the boards for them and getting them built. You'll learn far more than a book will teach you. There are lots of good Eagle and KiCAD tutorials online.
If you need to know about design, read application notes from manufacturers. There are thousands and thousands. Often they will dictate how you should route a particular component. You can look at reference boards from people like Analog, which typically come with schematics and layout files. Most of the big name companies have guides for layout:
What you will quickly find is that there's a lot of differing opinion on what is "right". Until you need boards with high speed digital/analogue/mixed, you don't really need to worry much about how things are laid out. You will almost always be constrained by more practical issues like enclosure sizes or where you have to put connectors.
I appreciate your feedback but it concerns me. It sounds like a lot of trial and error with components that cost money. One concern I have is that it's dealing with electricity. What are the odds of frying components or electrocuting yourself just doing trial-and-error learning from tutorials and datasheets? And when you've done a bunch, do you even really understand what you're doing when given fresh components?
"If you need to know about design, read application notes from manufacturers. There are thousands and thousands. "
I'll try to remember that.
"You will almost always be constrained by more practical issues like enclosure sizes or where you have to put connectors."
I'll definitely believe that. Thanks for the tips and different perspective.
If you make electronics, the odds of frying a component or electrocuting yourself are 100%, regardless of how educated or talented you are. The following happen to me on a regular basis, whether I'm working on high speed digital/RF or making something simple like an Arduino shield, even though I learned as an apprentice under a brilliant EE:
* PCB with poorly aligned copper layers shorting the second the prototype is plugged in, usually destroying at least one chip. Lots of smoke and burning FR4
* Solder in $100+ high power transistors the wrong way. BOOM
* Using a counterfeit capacitor from shady vendor that either shorts internally or just plain explodes. Happens a lot when I need a really large capacitor and have to get it on short notice
* Forget to use a little extra flux and tin 'whiskers' form between freshly soldered pins that short them the second the device is powered up. This is so common that NASA has a whole website dedicated to the topic [1]
* Use wrong temperature profile or make the pin layout a thousandth of an inch too small or large and bam, two solder balls on a BGA flow together, requiring hours for reflow and reballing if you're lucky, and a new $2,000 FPGA if you're not.
Any nontrivial circuit is going to be impractical to simulate (and impossible to describe analytically as a whole) for all but the most well-funded projects, so I'd say 90+% of EE is trial and error, even for the most experienced designers. There are many rules of thumb, and you develop an intuition for a wide variety of situations just like you do in programming, but it's just a fundamentally different field with different constraints.
Wow. That's some crazy stuff. So, the scary, trial-and-error is unavoidable then. Thanks for the feedback. Btw, I just recommended a few books here based on feedback from other EE's...
Any thoughts on them? Particularly, a combo of something like Malvino and the Circuit Designer's Companion to get a good head start on analog and PCBs respectively. Or do you have other references that kick ass in teaching practice more than theory? Gotta build up links for new people to accelerate the hands-on part of their learning, just like others did for programming.
Note: Art of Electronics is usually in my list but that link was for a digital learner. Not sure if it's needed there.
What sort of circuits do you want to design? RF? Audio? Analogue is a big field! It's like saying "I want to write programs, which books do you recommend?"
I'm gathering information to help everyone out then organizing, cataloging, and sharing it. You could say it falls into some basic categories:
1. Enough knowledge to get designs working on a FPGA plus integrate that with other chips on a PCB. OSS HW with minimal analog.
2. Enough knowledge to design basic analog circuits for control and stuff. Alternatively, to design digital cell libraries as there's almost nothing available for academic toolbuilders.
3. The serious, mixed-signal shit that lets me do some parts in digital and some parts in analog where it handles it better. I've seen analog coprocessors with 100x performance at 1/8th power on ODE's and stuff. It also seems like certain signal processing or media codec tasks would be crazy fast/efficient in analog. I know high-end ASIC's make extensive use of such techniques. What tidbits I see in blog comments and papers can only be described as black magic without a more thorough resource. :)
4. RF books beyond the ARRL ones that have been recommended to me. Need a lot of people experimenting with this stuff to reinvent things like TEMPEST that are classified. They need some good resources to get a head start.
So, those are some basic categories where I'm looking for both accessible, foundational material and cookbooks with heuristics. Being able to combine COTS components like MCU's and FPGA's on custom PCB's is major help to hobbyists. Being able to make the cells and basic, analog components required in about any ASIC in conjunction with tools like Qflow OSS Synthesis could get custom stuff going quicker. More thorough stuff for mixed-signal for its advantages plus to explore analog and digital interactions in digital systems that can screw either up. And RF for reasons stated.
Whatever you have. Drop it here or email it to me in my profile address. I'll keep circulating that along with others tips and resources whenever people ask.
I've never had electric shocks before, but frying components sure. Don't work with mains voltage directly if it worries you. You can power most hobby projects from a USB port or a wall wart. Designing power supplies is one area where you might want to read up on things like trace clearances, but again, look at YouTube for PSU teardowns (bigclivedotcom has plenty).
As always it's mostly human error. I have never (literally) fried a component from overheating, it still amazes me how hardy modern ICs are. I've also never paid too much attention to ESD protection, though if my job depended on it then I would. What has happened is shorts, often. Even the GPU guy routed his board with GND and VCC back to front, it happens to the best of us. Most chips are at least partially tolerant to silly things like overvoltage, so even if you accidentally short some GPIOs on a micro, the protection circuitry might save you. Simple advice is to put a low-current polyfuse on every prototype you make. It's saved me so many times when I've shorted power supplies by accident.
The more complicated your circuit, the more likely it will be that you mess up. Don't try and solder a 150-pin BGA on your first board. Build some breakout boards for sensors, build your own microcontroller dev board (ARM if you want a challenge) or pick a project from the internet.
Odds of messing up your first board in some subtle way? Unless it's a very simple board, > 80%? Components are cheap though, roll with it! Plus you probably won't brick all the components if something goes wrong; the magic smoke will usually only come out of one.
When I make mistakes, usually they're footprint errors (e.g. wrong pinout) or construction errors (shorts between pads, etc.). If the schematic is incorrect then that's another issue, but most often it's things like not reading a datasheet properly and forgetting to connect a pin, tying a positive-enable pin to GND rather than VCC, etc. In principle layout engineers assume that the circuit diagram is gospel, so the blame doesn't always lie with them. Of course if you're both the designer and the layout engineer...
I'm now at a stage where I can get a board back from the fab and it'll usually work :)
Just to add some: There's a lot of specialization in EE like any other field and you can't really dive into everything. I find power electronics to be more of a black art than RF or high speed digital so I try to never design my own. I've seen people spend years working on a single design for a solar inverter making dozens if not hundreds of iterations, testing with all types of switching circuits and chips, comparing the behavior of one vendor's capacitor to another, understanding different transistor behavior, and so on. Thankfully, nowadays you can go to TI and use their automated schematic generator for your power supplies. They have many reference designs and tons of documentation on PCB layout in a variety of situations. Debugging them, however, is a whole different story. If you can afford it just buy a module and never think of it again. Polyfuses are a godsend, especially if you're designing a high current PCB like a motor controller. It really doesn't take much to melt a 1.5 oz 10 mil trace.
How expensive trial and error is depends on your experience. When I'm done using a board or have some old electronics to throw away, whether I bricked it or it's obsolete, I always throw it in a pile. When I have time I just go through and desolder each nontrivial part, because it helps build an intuition for how each type of solder will respond to heat and flux, how wick looks as it absorbs solder and how to move it to get all the solder without overheating the chip, and so on. You need to develop that muscle memory like a surgeon would, because once you're good enough you can do crazy things like snake a tiny wire under a BGA chip under an x-ray to fix flaws in the design or reflow. In more expensive designs I'll regularly take small gauge wire and solder it all over to fix the design, as well as cut traces or lift copper layers after stripping the FR4.
Hand assembling electronics is largely more craft than engineering.
Note: Look up the 2nd edition or something as I got it for $2. Principles of analog don't really change. Super-easy to read with more heuristics & diagrams than theories. I didn't know until today it was being updated.
Note: A high-assurance engineer told me this guy was a master and this book is an all-in-one covering most of what you need to know. A reviewer said it's mostly for digital, not analog, stuff, but that's probably your goal.
Digital stuff is highly opinionated on whether it's "good" or not. So, read the reviews to determine if it's good for you personally.
Note: These were said to be nice for beginners on Verilog, etc. Good tutorials online, too, with practice code and help on StackOverflow. I suggest getting a simulator and/or cheap FPGA then just experimenting.
Note: Books on Digital System Engineering, VLSI testing, and functional verification to top it off.
Endnote: I don't guarantee quality of any of these except Electronic Principles and Circuit Designer's Companion. The rest just had positive reviews plus what I assessed to be decent information for their target topics.
I've yet to find any good intro-level PCB design references. There's plenty of good information out there, but it's generally not easily accessible, and is scattered around blogs, forum posts, open hardware sourcefile dumps, etc. There are some fairly decent videos from the likes of Dave Jones (of EEVBlog), and some of the Mikeselectricstuff (.co.uk) videos that talk about and sometimes show the interesting parts of PCB design.
What's lacking (or I've yet to find) is a simple tutorial-type progression that explains both how, and why, to do certain things, especially when it comes to things like component selection, footprint design, part layout, and actual track routing.
The "try it, make mistakes, learn" approach is almost inevitable, but the cost/time of the feedback loop can be pretty steep for a hobby (you can have 24-hr turnaround on boards if you've the $$$, or you can pay $5/board in 6-8 weeks, but the middle ground is tricky and messy. Plus all teh different board houses might have different design rules and other things that complicate matters for beginners.
My suggestion would be to find some Open Hardware kits or designs, buy a couple to make them, but also get the design files (hopefully in either KiCad or Eagle, which are free/affordable), and then start by ripping up all the routing and seeing if you can re-track all the components and get it to pass ERC/DRC checks. Maybe bounce it off a few people in /r/askelectronics or the eevblog forums, and then have it sent out for fab. If it works, pick a new OHW project with harder things, and repeat :)
That saves you from having to deal with footprint, schematic capture, and BoM decisions initially, and lets you focus just on the board design parts. You'll want to get there eventually, and you can learn a lot from how other things are put together.
Maybe pick some simple circuits (a basic CMOY headphone amp, say), and redraw the schematic in your EDA tool. Then gather up the datasheets - usually they'll have a recommended footprint drawing, and you can learn how to build those in your toolchain.
There are some open libraries of component footprints (I think sparkfun has one? Maybe octopart as well?) but it's a useful skill to learn, and easiest on little things with few pins before you work up to the 192 pin FPGA or something :)
* Always print your designs 1:1 scale before sending out for processing.
* Always check your design file outputs (gerbers) in an external viewer to make sure they look ok/make sense.
* Remember to flip top/bottom layers the right number of times! This should happen properly on export, but it's easy to screw up and have the perfect board if you can find components with the exact opposite pinouts :)
* Don't be scared of starting with surface-mount. You can usually get things in pretty big packages if you want, and an 1812 or 1210 is actually not far off the size of a normal resistor body (minus the leads). You can go smaller as you're comfortable and pick up technique.
* I can't really offer any layout tips, because I'm still not very good. If anyone has pointers to good docs on:
(a) how to decide where to place/orient your components on the board to begin with,
(b) choosing placement/routing grid sizes and track widths/via sizing,
(c) appropriate layer count/usage - when to add more layers, value of pwr/gnd planes, use of fills, etc.
And lastly, with KiCad especially, how to fix things when you've routed yourself into a corner with just a few pins left but no way from A to B (preferably without either adding layers or through-hole links). I know about the 0R resistor bridge, but it has its limitations.
Appreciate the insightful reply and advice. So, you're into it enough that you could probably handle an intermediate or advanced text. So, if you will, try to skim this one from a "master" designer that several EE's recommended:
Lucky enough to find a free link. I want to see whether your skimming shows that it would've taught any of the types of things in your post, or some of the stuff you're trying to learn. Trying to gauge its value in accelerating the trial-and-error process.
"The "try it, make mistakes, learn" approach is almost inevitable, but the cost/time of the feedback loop can be pretty steep for a hobby (you can have 24-hr turnaround on boards if you've the $$$, or you can pay $5/board in 6-8 weeks, but the middle ground is tricky and messy."
That's exactly my problem. A lot of smart people don't have the money for that crap. So, it seems the experimenting is inevitable, yet it's worthwhile to try to determine exactly what experiments with what components teach the most lessons for minimum dollars. At the least, good references that tell you heuristics for avoiding the worst issues. The other commenter listed a few I had never heard of that involved "BOOM's." It's 2016 and we still don't have comprehensive, accessible guidance on reducing explosions? Really? Haha.
FWIW soldering SMD is much easier and more pleasant than through-hole :) The difference is that it takes some small amount of learned technique and just slightly more equipment - namely liquid flux.
Depends on the size of the components though. I got to take a short SMD soldering course while I was in the military. And down to a certain size, it is indeed easy. But as miniaturization continues, at some point manually placing components becomes really hard - much like working with clocks.
True! But two things save us and make DIY SMD reasonable:
- With practice we can go down to surprisingly small components. It's less about precision work and more about having an intuition about heat flow and surface tension which let the components align themselves.
- When components are too small or simply impossible (or beyond my knowledge) to solder with an iron, we can always just use an oven :) Which is entirely reasonable to do either at home / in the garage, as they can be built very cheaply, or locally at a friend's or a fab-lab type deal.
Soldering SMD is, in my experience, a lot simpler than soldering their through-hole counterparts - provided you've got a proper soldering iron, that is.
Board preparation, obviously, also is a breeze - once you've drilled holes for your vias, just place the board in a vice and off you go.
Also, added bonus when it comes to passive components - rather than having those pesky color bands (annoying for the chromatically challenged among us), values are typically stamped on the components using numerals - so less chance of making a mistake.
Well, it is the subject matter of technical CS. Which means it’s also part of what many universities teach in CS.
We have a mandatory course for all CS students at my university where we have to design a full DLX processor in VHDL and run it on a simulator. Just so we can understand how these things work.
So, the average CS graduate from the uni where I am should be able to do this.
And this is just a normal university, I’d expect that the universities with better reputation are doing this on a completely different level even.
Certainly some of it can be done by regular CompSci students. Yet doing a DLX processor in VHDL on a simulator is not the same as implementing a CPU and graphics card on an FPGA, integrating them, and especially doing a PCB. There's quite a jump there in necessary skills.
Btw, you're the first person I've seen mention DLX in a while. Feel free to pass this along to anyone you know still teaching that or working with it, as they might find it interesting. It's a mathematically verified DLX processor from the Verisoft project that's languishing in obscurity (as far as I'm aware) rather than being exploited (or cloned) for its benefits:
Minor modifications like crash-safe.org for security, with a RISC-V decoder in front & ChipKill ECC, might make one badass CPU for embedded use. Probably good for learning & OSS HW tweaking, too.
That probably varies from country to country or even university to university, but we did quite a bit of hardware stuff - from simple 7400-series stuff and op-amps, through PALs and CPLDs, to simple processors on FPGAs and dimensioning circuits for ASIC fabrication processes.
In Portugal there isn't a CS degree as known in US.
It is usually called Software Engineering in Informatics or something similar.
In the universities where you have computing and EE degrees, you are free to mix and match computing and EE lectures for the optional credits that you need on top of the compulsory ones.
Personally I just did some digital design stuff and ended up spending the rest in systems and graphics programming. But others were focusing on EE lectures, for example.
This stuff is less complex than you might think. Building hardware is only really hard when you want to be efficient. It's like writing code in C# vs ASM.
For that matter, I did this sort of work for decades and, while I knew I was good at what I did, to me it was relatively routine and I would expect most EEs could do the same thing. No, it would not have been years but, without delving into the article (I just woke up), it might have been a few months. Maybe weeks if I was still on my game.
It's a very cool project. However back in the day I knew someone who made a half-height accelerated graphics card for MVME (68020 processors, VMEbus so similar era and processor). He pretty much did it on his own, although it was his full time job at the time and he had some support selling the boards. Also he used an existing (Hitachi?) GPU and some existing VME interface chip.
It's probably not common knowledge that Hitachi even made (and Renesas still makes, apparently [1]) graphics hardware. To the best of my knowledge, they designed the sprite/polygon rasterizer ("VDP1") for the Sega Saturn, and the off-the-shelf Hitachi/Renesas chips seem to be loosely related.
With the amount of content online, email, and a couple days per week of dedicated focus, you still can!
I think if you set a goal like "build x for y" and struggle through it, learning as you go, you'll learn faster because you skip all the stuff you don't need (because you literally didn't need it to hit your goal). And think of all the money you'll save!
Skip college. And if you already went, don't go back!
Most people who are successful as autodidacts are successful in college, it makes zero sense to avoid it if you're already the kind of person who learns well.
Most people are not great autodidacts, as shown by the miserable completion rates of online courses.
My primary argument against college is mostly money at this point. The alternatives are so much cheaper in comparison to traditional college (both to produce and consume). The difference-in-quality argument is weaker and weaker.
I don't think free college for everyone fixes this either. It's like trying to solve the transportation problem by buying everyone ferraris. Free college is only good for colleges.
College is just a bad deal. A lot of people already realize this, but the idea of going to college if you care about your career at all is very firmly entrenched. 10-20 years from now, we'll look back at this period of going into debt just to get an education as being super weird and counterproductive. The people pushing the idea that this is somehow a good deal, very respectable people from respectable institutions, will be viewed as sub-prime pushers.
Way OT: Student debt, the war on drugs, prison, and medicine. These are the (US-centric) areas off the top of my head where people in the future will be like "what were they thinking?"
Meh, I'm an autodidact and I rarely complete a MOOC that I sign up for. There are many reasons for this. Quite a few try to just be copies of the campus course. Others focus too much on lecture and not enough on doing. In some courses, there is a huge disconnect between the difficulty of the assignments, and the content covered in the lectures. But the biggest reason for me is, I usually sign up for a course as a means to help me get going on a project. As soon as I'm up and going with the project, I leave the course behind. There are only a handful of courses where the content was so interesting and well paced that I stayed for the whole thing. The embedded electronics course at edx being one of them.
> Most people who are successful as autodidacts are successful in college,
Yes and no. I had trouble in college every now and then because it interfered with my learning. That is, as soon as I found something more interesting to learn by myself I tended to quit paying attention to classes. It was similar in school too. But I owe it my career - all things programming was what I was doing instead of school assignments.
To some extent, I think I had the opposite problem. University work harmed my enthusiasm for learning on my own.
Once I had to spend significant effort on my assigned work, I felt like I had to use my free time for leisure or I'd be wasting it. Plus any time I felt enthusiasm to learn something else, I'd feel like I ought to be spending that effort on my assigned work instead.
> Most people are not great autodidacts, as shown by the miserable completion rates of online courses.
For me (though I consider myself a pretty good autodidact) the reason why I didn't complete many MOOCs (before you had to pay to even get a certificate) was simply lack of time: my main "job" is completing a PhD in applied mathematics, so if anything came up in my PhD work that required a time-intensive intervention (say, I found out that my proof idea didn't work as well as I originally thought; or the advisor got the idea - which does not mean it was always a bad idea, just to be clear - that I should read up on more papers in some specific area, which is also very autodidactic), I sometimes had no choice but to stop a MOOC that I was attending.
Generally, the stuff you need to get something done overlaps pretty well with the stuff you need to get other stuff done. Not 100% but enough you're probably not wasting your time.
It's at least as good a heuristic as "prof said it's important".
What will be the technology size of these printers? I suspect the result will be circuits operating at a very low frequency compared to what we're used to.
Yes, we live in a very interesting time. You can order a decent quality PCB from the likes of OSH Park for peanuts. You can fit an SoC capable of running a usable Linux into an FPGA that costs well under £50. MCUs are cheap and powerful. Great time for a tinkerer.
> I checked eBay for the "big" Amigas I never had, expecting them to be cheap, but found out that these boxes had turned into valuable collector's items.
Story of my life. I had an Amiga, got rid of it. Had an H-11, got rid of it. Had a Digicomp I and II, mom got rid of them. All worth a big pile now.
My basement is full of old computers I stuck down there as I upgraded. All worthless :-)
I have an unerring ability to get it backwards what will be a collectors' item and what won't.
This is pretty great. I especially like the poor man's ChipScope that shows up on the display itself.
I'm sometimes surprised by lack of options for a small, cheap GPU that could be readily integrated into a design like this. There's seemingly a big gap between the classic VDP chips like the TMS9918 and its successors (which AFAIK have all been out of production for ~20 years) and modern GPUs that come in monster BGA packages, call for super-fast external RAM and dedicated power supply circuitry on 6+-layer boards, and can't be bought through mortal-friendly channels anyhow. FTDI has the FT800 series, and I'm sure there are still suppliers of S-PPU (the SNES graphics hardware) clones if you know who to talk to in China, but other than that it seems like the options are:
1) Roll your own in an FPGA, like this project
2) Get a fast, preferably multicore microcontroller with suitable I/O and program it to do rendering in software (maybe a Propeller, GA144, or some XMOS chip?)
3) Buy your GPU attached to an ARM core (i.e. Allwinner, Rockchip, Freescale i.MX, TI OMAP/Sitara, etc.) and program it to speak some GPU-oriented protocol (which probably adds a considerable amount of latency, since you'd realistically need to use the Linux drivers)
"F18A stands for “FPGA TMS9918A” (hence the project’s title) and will be 100% hardware and pin compatible replacement for a TMS9918A VDP (video display processor)."
The MediaQ MQ200 was perfect for this kind of thing, it had embedded DRAM and a flexible bus interface. It was BGA though, but so were things like the StrongARM CPU that it was typically connected to.
Not much market for them. For most applications, you're better off using a SoC with a built-in GPU and running your user interface code on the built-in CPU core(s).
How does one learn how to do this stuff? Low level programming interests me immensely but I can never find a good resource for how to learn. I know C, C++, Java, Javascript, Python, Objective-C, Swift, but the only reason I have been able to learn these languages and their technologies is because of the resources available. Can anyone point me in the right direction to become acquainted with low-level technologies/understanding?
When the author describes the Amiga as an understandable system, he's reasonably accurate. Even the lowly AmigaDOS manual documented the assembly instructions, and the in-the-box, Microsoft-produced Amiga Basic allowed fairly direct access to the hardware.
These pale though in comparison to the two Amiga Rom Kernel Manuals and the Amiga Hardware Manual. [1] These were sold in the computer section of the shopping mall bookstore back in the day, and even today provide a detailed explanation of a multi-tasking operating system and simple graphics programming.
[Edit] Part of the reason people continue to hack on it is that it is practical. Even the this-is-how-to-turn-it-on-and-use-the-mouse Introduction to the Amiga 500 had schematics in the back, pinouts for all the ports, and pinouts and block diagrams for the important chips.
There are books, courses, homework, and lots of free resources -- in particular, emulators.
Then find a project that interests you, and learn what you need to do for that. 68000 assembly? Forth? Graphics on a VGA card emulated in a VM running MS-DOS? All of these things are entirely do-able.
Or find a project that is already in development and contribute to it. The important bit is your own motivation.
I don't know if MMIX is that useful to learn, honestly. It's RISC-like, which is good, but the register saving convention is a bit weird and it's not that similar to things that already exist. Most assembly languages are pretty easy to learn (even x86); they just seem hard because they're so low level.
If you can program in C at the level of K&R, your first step is to work through Computer Systems: A Programmer's Perspective [0].
Coursera has a course called the Hardware Software Interface [1] which covers the same material if you want lectures and forums and the other benefits of a MOOC.
One fun (if not particularly practical) path is to learn the Atari 2600 video game platform.
Documentation is readily available, along with source code to a number of classic games. There are several emulators and even a small community of people turning out new games for the platform.
The 2600 was/is quirky, and of course it's ancient history, but it does provide a path to learning some basics that still apply today. Most importantly, it's fun.
Read the book Bebop to the Boolean Boogie. It's a fun, well-explained introduction to computer engineering fundamentals. Also, the Art of Electronics is a great resource.
If you use windows, I think Microsoft still has a DDK somewhere that contains the final version of MASM, their macro assembler. I assume there's a gnu option out there too. Get one of those and look on google for a hello world tutorial. Write some user mode stuff. Then try a boot sector / boot loader. Use a VM for testing and development. Get a book like The Ultimate PC Hardware Guide and start writing a hardware abstraction layer for your new OS...
> If you use windows, I think Microsoft still has a DDK somewhere that contains the final version of MASM, their macro assembler.
Even simpler: Install Visual Studio 2015 Community (the current version) and run the "VS 2015 x86 Native Tools Command Prompt" or "VS 2015 x64 Native Tools Command Prompt" (under Windows 7 in the Start menu under "Visual Studio 2015" -> "Visual Studio Tools" -> "Windows Desktop Command Prompts").
Now simply run
ml
or
ml64
(depending on the prompt that you opened). Voila - these are the current x86 or x64 versions (14.00.x) of the Microsoft Macro Assembler.
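If you then want something that actually assembles into a program, a minimal "hello world" for ml64 might look roughly like the sketch below (written from memory, so treat it as a starting point rather than gospel; the file name hello64.asm and the entry symbol main are arbitrary, and it pops a message box rather than writing to the console to keep it short):

; hello64.asm - assemble and link from the native tools prompt with something like:
;   ml64 hello64.asm /link /subsystem:windows /entry:main user32.lib kernel32.lib
extern MessageBoxA: proc
extern ExitProcess: proc

.data
caption db "hello", 0
msg     db "Hello from MASM64", 0

.code
main proc
    sub  rsp, 28h          ; 32 bytes of shadow space + 8 to keep the stack 16-byte aligned
    xor  ecx, ecx          ; hWnd = NULL
    lea  rdx, msg          ; lpText
    lea  r8,  caption      ; lpCaption
    xor  r9d, r9d          ; uType = MB_OK
    call MessageBoxA
    xor  ecx, ecx          ; exit code 0
    call ExitProcess
main endp
end

MessageBoxA keeps the example short; swapping in GetStdHandle/WriteConsoleA works too, it just needs the extra stack bookkeeping for the fifth parameter.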
Neat! I've also had pretty good luck with nasm on several platforms, including Windows, Linux, OS X, and DOS. fasm also looks pretty neat, but I haven't done much with it aside from a few toys for Menuet OS. And then there's yasm, which I haven't used, but must be fairly popular given its inclusion in several distros.
There's GNU as (gas), which I've used quite a bit, but wouldn't really recommend because it uses strange "AT&T" syntax rather than the syntax you'll find in the Intel manuals. gas is also meant more as part of the GCC pipeline than as a standalone assembler, so even though it can function as one, it's not necessarily nice as one.
I've been meaning to play around with the LLVM assembly language. It looks neat, with the bonus of being reasonably portable, but I haven't yet found the time.
HLA (High Level Assembly) by Randall Hyde seems like an interesting way to slowly lower yourself into assembly language programming, but that's not how I cut my teeth, so I can't speak to its effectiveness.
> There's GNU as (gas), which I've used quite a bit, but wouldn't really recommend because it uses strange "AT&T" syntax
All AT & T syntax is, is
move src, dst
rather than intel's
move into dst, src
as far as I know, intel is the only company which did that, and to me, it is intuitive to move something somewhere, rather than to somewhere move something.
In Intel syntax instructions are not suffixed. In AT&T there is a suffix (q, l, w, b) depending on the operand size. For example (assuming a 32-bit register), mov (Intel) becomes movl (AT&T). Also the argument order is reversed. Constants have to be prefixed with $. Hexadecimal values are prefixed with 0x instead of suffixed with h. Registers are prefixed with %. With this we already have
mov eax,1
mov ebx,0ffh
vs.
movl $1,%eax
movl $0xff,%ebx
But also the notation for accessing memory is different: for encoding the SIB (scale-index-base) byte (+ disp[lacement], if desired), Intel uses [base+disp+index * scale], while AT&T uses disp(%base, %index, scale). Thus we have
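(for instance, to illustrate with ebx as base, esi as index, scale 4 and displacement 8)

mov eax, [ebx+8+esi*4]

vs.

movl 8(%ebx,%esi,4), %eax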
Edit: On the other hand, when the size of the operand can't be inferred from the operands, in Intel syntax you have to add 'BYTE PTR', 'WORD PTR', 'DWORD PTR' or 'QWORD PTR' to disambiguate. For example
mov [ebx], 2
is not unique, so in Intel's syntax you have to write respectively
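(picking sizes for this same example)

mov BYTE PTR [ebx], 2
mov WORD PTR [ebx], 2
mov DWORD PTR [ebx], 2

while AT&T expresses the same thing with the suffix alone: movb $2,(%ebx), movw $2,(%ebx) or movl $2,(%ebx).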
Coming from MOS 6502 / 6510 / Motorola MC680## background,
movl -4(%ebp), %eax
is intuitive to me. On the Motorola MC68000 family, it would have been
move.l -4(sp), a0
so I can hit the ground running with AT & T syntax.
The first time I saw
mov eax, [ebp-4]
I wanted to take a hammer and beat the idiotic IBM PC tin-bucket into oblivion. Did the square brackets have any special meaning? Who the hell knows! What do they mean?!? Completely unintuitive.
With AT & T syntax, I can even hit the ground running reading SPARC assembler:
mov 5, %g1

moves five into global register 1. Perfectly logical and intuitive, and I didn't even have to know anything about SPARC assembler.
Then there is the deadbeefh instead of $deadbeef or 0xdeadbeef syntax. Everybody else either used $deadbeef or 0xdeadbeef, but not intel, oh no! intel just had to be different. Irritating to no end. Again, taking a hammer to the PC bucket was a temptation...
Only intel could come up with something which does not relate to anything.
> Only intel could come up with something which does not relate to anything.
This is wrong. You have to remember that the 8086 is (mostly) source-code compatible with the 8080 (at least if you rename some registers - a simple search & replace) - though not binary compatible. The assembler syntax that Intel developed for the 8080 is ugly. But Zilog, who developed the Z80 (which is binary compatible with the Intel 8080), devised a much better assembly language (as far as I know they had to use a different assembly language for legal reasons). For the 8086, Intel built on the ideas behind Zilog's assembly language. In Zilog assembler
(HL)
(IX+index)
(IY+index)
is used for accessing memory. Now replace the ( and ) with [ and ], and additionally keep in mind that the function of the register pair HL in the 8080 roughly corresponds to bx (a "register pair" consisting of bh and bl), and it looks a lot like x86 assembler (IX and IY only exist in the Z80 and not in the 8080; but despite that, the syntax for indexed addressing is again reminiscent of what one is used to from x86 assembly in Intel syntax).
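To make the correspondence concrete, loading the accumulator from the address held in that register pair looks roughly like this in the two worlds:

LD A,(HL)      ; Z80
mov al,[bx]    ; 8086, Intel syntax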
EDIT: Also the parameter order dst,src is the same as in Z80 assembler (but this order was already used in 8080 assembler, so if anything Zilog copied the order from 8080 assembler).
TLDR: The Intel syntax is related to Zilog Z80 assembly.
> Intel's syntax is most common, not the AT&T one.
I'd be a little bit more careful with such a statement: Under Windows (and formerly DOS) Intel syntax is the common one, while under GNU/Linux and OSX the AT&T one is used.
> If you translate mov into =, the syntax makes much more sense than AT&T.
Though I prefer the Intel syntax, I'd be careful with "makes sense" here: According to http://stackoverflow.com/a/4119217/497193 people who grew up with MIPS seem to prefer the AT&T syntax since it is much more similar to MIPS assembler.
It has nothing to do with the operating system. AT & T syntax follows the same style as pretty much any other computer and processor in existence. (Exceptions exist, but they are exotic oddities.)
> Intel's syntax is most common, not the AT&T one.
No, intel is an exception, not the norm. Amstrad, Atari, Commodore 64, Amiga, Sun (both Motorola and (Ultra)SPARC) all use "move src, dst", $ or 0x... only intel diverges from the norm.
Others have given hints for how to get started on Windows. On Linux, assembly for x86_64 is actually rather pleasant, and it looks like a project that was announced on hn a good while back has been fleshed out quite a bit:
(Incidentally, I did a fork of the early stuff just to see what it would look like with gas syntax - the standard for gcc. Now that there's more code in the parent repository, maybe it's time to revisit):
https://github.com/e12e/asm/tree/gas-syntax/hello
For a long time, the best resources were (arguably still are) for 32 bit x86 assembly -- but IMNHO it's rather unpleasant in terms of segmented memory, limited registers, and the C call abi is also a bit "clumsy" if you want to work in assembly as much as possible (again, this has to do with limited registers). That said, Randall Hyde has some great resources on 32 bit assembly:
An older resource, for 32bit assembly:
http://www.drpaulcarter.com/pcasm/
I have an old print-out of this on my bookshelf - it's a pretty straightforward introduction to how one might go about combining C and assembly. It'd be great to see an updated version for 64-bit though.
An operating system in assembler. The 32-bit version is GPL; the 64-bit version is, as far as I can gather, under a more restrictive, source-available license:
I would recommend going through the first tutorial listed above to get a feel for things. Assembly can be a lot of fun - but I think you'll generally have a hard time doing better than gcc/clang if speed is your goal. Still useful for stuff like boot loaders and such. If you want to play with OS/boot-loader development, it is easier now than ever before thanks to emulators like Qemu: http://wiki.qemu.org/Main_Page
I found host read timing to be significantly harder to
get right, and it's still not perfect. Host reads can't
be put in a queue and have to be processed immediately.
This interrupts the fetching process; if the host needs
to read from a totally different memory location then
the display engine is fetching, the SDRAM has to close,
refresh and reopen rows, which is a time consuming
process. So far, reads from the host cause temporary
distortions (glitches) in the display output. I plan to
mitigate these problems using either burst mode reads
and fast switching and/or inserting CPU wait states via
the XRDY signal.
... why not just maintain two copies of the frame buffer, one onscreen and one off? Your hardware FSM then has a much simpler job: make the real VRAM contents match the shadow buffer.
In other words, instead of tackling the problem with 1988-era engineering, take advantage of the free parallelism and essentially infinite (from a 1988 perspective) memory that you get with an FPGA board today.
Hell, there's probably enough BRAM on that Spartan6 to implement an Amiga frame buffer.
You are thinking of write glitches - that is what double buffering solves.
Read glitches happen when two reads collide, and without any delay mechanism you need to choose which one to drop on the floor. What you do to solve this is implement a busy wait on the external bus (/DTACK on the Amiga); you can also read ahead video output data in chunks (think filling a cache line using SDRAM burst mode), giving the memory controller more slack in between.
This is why, back in the day, different graphics cards delivered different Doom 1/2 and Quake 1 fps, while intuitively there should have been no difference, the games being software renderers.
To my eyes, this looks insane. Very impressive, the kind of post that makes me feel both jealous and inspired! It looks like it could have taken up more than just some 'free time' - was it truly confined to after hours?
I'd love to hear what tricks you (author), or anyone else, has for making free time to explore side projects. I'm finding it harder and harder to contain work coding; it seems to consume all available time!
Basically everything is known about the Amiga - people have reimplemented compatible hardware on an FPGA (https://en.wikipedia.org/wiki/Minimig). Every piece of code that runs from power on has been examined in detail (there's even a free software from-scratch version of the boot firmware). The Pi still requires closed boot code that runs on the GPU in order to bring up the application processor, and that still isn't well understood.
We know how the platforms work in general, but 386-era is when SMM appears for the first time and so it's unlikely that anybody has full knowledge of the entirety of a specific system.
Suddenly there's a spike in demand for Amiga 2000s on ebay...or maybe it's just me, I immediately opened a new tab and started searching for one to buy.
Wow, I'm really jealous of this. I never even had a 2000, I had a 500 with a side expansion that could use an a2000 card, for extra ram, since there was no HD.
I always wanted to do hardware work back then, but never quite had the right opportunity going through school.
So cool to see this. That A500 was how I learned so, so much about computers, and I love that it's still a tool for people to learn with.
Far be it from me to question other people's choice of their side-projects, but I sometimes wonder what these geniuses would be capable of if they applied their talents to problems whose relevancy is more ... agreed upon ...