Computer Science Papers Every Developer Should Read
Posted by milanm08@reddit | programming | View on Reddit | 108 comments
elebrin@reddit
I’d like to see some of the classics like “goto considered harmful,” “Reflections on Trusting Trust,” and… anything at all Dijkstra wrote.
blind_disparity@reddit
Let me save this post so I can never read it again
Heffree@reddit
I’m back to read at least one.
TimedogGAF@reddit
Let me save it in a place that will make sure I will see it again soonish, because it's not properly organized. Then when I see it the 2nd time I'll STILL not read it but I'll make sure to place it correctly in my deep folder hierarchy black hole just to make 100% certain it NEVER gets seen a third time.
barackobamafootcream@reddit
Thanks for reminding me to save so I can never read again
deepthought-64@reddit
Damn dude, that hits too hard...
imachug@reddit
Something I wish more people realized is that papers aren't significantly different from the articles they read online all the time.
There's an assumption that papers contain lots of hard data, complicated math, and three dozen references to papers from 1950. But you're just as likely to find a paper with an accessible introduction into the topic, hand-waving for intuition, and modern language. As far as I can see, almost all papers linked in this post are of the second kind.
What I'm saying is, don't let a LaTeX font affect your judgement. Try to read papers as if they were posts from r/programming, just more decent (/hj).
Successful-Money4995@reddit
A lot of papers are garbage, though.
I think the authors intentionally try to sound learned in order to impress a professor. Just speak plainly to engineers.
I can't stand papers that invent their own pseudocode to demonstrate an algorithm. Especially now that we have high-level languages like Python, it's often just as brief to write Python as whatever pseudocode the author invents. I think authors use invented pseudocode to avoid having to write code that actually compiles and works, because writing code that works is hard and waving your hands is easy.
LaTeX is not good. Programmers left it behind for HTML and then for markdown. Reading markdown is way nicer than the LaTeX format: I can click on links easily, and we can use colors and fonts. Miss me with those grainy graphs; give me SVG.
And the LaTeX paper is probably behind some annoying paywall, too.
I read them because I have to, but it's an archaic format and we should all just move on.
JarateKing@reddit
There are a lot of complaints to be had about LaTeX; I've got my share. But most LaTeX papers I've read from the past decade or two have natively supported clickable links, syntax highlighting, colored high-quality graphs, etc. The main competitor is Word, and LaTeX's output is miles better.
The stuff you're describing sounds more like a problem with scans of old printed documents, not something inherent to LaTeX, nor something that'd be fixed by putting it into HTML or markdown (which is so intentionally limited that it wouldn't even support all the basic formatting you'd want in a paper).
HankOfClanMardukas@reddit
Are you printing web pages or magazines? Nobody needs LaTeX for anything but industrial printing.
New_Enthusiasm9053@reddit
Markdown cannot do basic maths. It's a complete joke to suggest it as an alternative. Even Word would be better, despite its severe limitations.
gyroda@reddit
Markdown can't do most things.
I love markdown, I use it all the time and it's a great format for things like notes, comments, readmes and so on, but all it can do is basic formatting: headings, emphasis, links, lists, quotes, and maybe a few other things I can't recall off the top of my head.
Images and syntax highlighting are extensions. Text size is down to the reader, you can't control that from the source. You can't control document shape or text flow or anything - it's a PITA because a README can often be too wide to read if I open it in my IDE.
In no way is it truly comparable to LaTeX.
Successful-Money4995@reddit
For computer science, I just write the math in Python or C form. I can't express an integral, but I rarely need one anyway. If I really needed one, there are websites that will convert an equation into an image for me.
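To make that concrete, here's roughly what I mean, as a throwaway sketch (my own toy example, not serious numerics):

```python
# Approximate the integral of x^2 over [0, 1] (exact value: 1/3)
# with a midpoint Riemann sum -- "the math in Python form".
def integrate(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

print(integrate(lambda x: x * x, 0.0, 1.0))  # ~0.3333333...
```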
New_Enthusiasm9053@reddit
Lots of people use integrals for lots of stuff. A paper that describes which algorithm to use will need to display both the maths and the code, for starters.
Sure, there are terrible workarounds. They're just less productive and less readable than using LaTeX directly.
LaTeX has many flaws, but its output isn't one of them, and its productivity issues aren't caused by its maths support.
JarateKing@reddit
I use it for basically anything Word might be used for. Not even talking about academic papers, we're talking about reports, standalone documents, serious writing, my resume, etc.
Do I need it? I could probably get away with Word just fine; that's what most people do. But LaTeX has nicer output, you can do a lot more via packages, the workflow of defining commands and composing .tex files is extremely nice, and it works better under source control. There are pain points (e.g. referencing images by filename instead of pasting them directly into the document), but I don't want to go back to Word.
imachug@reddit
This... isn't really how it works. Many CS papers are complicated because they're intrinsically based on complex mathematical topics.
IMO, using pseudocode is actually good in many cases because it doesn't force the author to tunnel-vision on a particular implementation method. Pseudocode allows authors to abstract this complexity away and leave the choice to implementors. Speaking as the latter, this is useful because I can directly grasp the idea and realize it the way I find optimal, instead of trying to read someone's attempt at writing a "working" 200-line-long Python snippet.
This is just wrong. Idk, "coders" might have switched to HTML, but LaTeX is popular in CS for a good reason: it handles math way better than HTML (or MathML) can ever aspire to. Markdown might be fine for pure-text content, but anything containing math necessitates LaTeX, at least for the formulae. Also: LaTeX has colors, fonts, links, and vector graphics.
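To illustrate with an arbitrary example of mine (not from the post): a labeled, cross-referenceable display equation is a few lines of LaTeX, and Markdown has no native syntax for it at all:

```latex
\begin{equation}
  \label{eq:fourier}
  \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, \mathrm{d}x
\end{equation}
% ...and later in the text: see Equation~\ref{eq:fourier}
```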
Many, many papers are published on arxiv. Also, I know this isn't what you're looking for, but SciHub exists.
Successful-Money4995@reddit
I've read papers that seemed really complicated and went into a bunch of math but didn't need to. I've felt that the math was there to make the paper seem more impressive. "What if we solve well-known problem X with well-known technique Y instead of the usual, well-known technique Z?" Citing a reapplication of known stuff perhaps sounds less impressive than a novel technique, so everything gets derived from first principles, and then halfway through the paper I'm like, oh, this is just a heap or a prefix sum or whatever. I can't tell if the author is intentionally trying to seem fancy or genuinely didn't see that this is just a new arrangement of well-understood building blocks.
Maybe the pseudocode I'm reading in papers is different from what you're reading? I'm usually able to convert the pseudocode into Python and it ends up pretty much the same length, but without all the fiction. Like, I see pseudocode using a subscript to extract bits x...y from a variable, and I can never tell whether that's inclusive, whether the LSB is bit 0 or bit 1, etc. So there's a description in the text explaining all that. Or just write it in Python and you don't need the explanation in the first place (see the sketch at the end of this comment).
Converting to another language is not a real issue. It doesn't have to be Python. Just don't needlessly invent a language that has no formal specification and/or compiler.
I suppose all my exposure to LaTeX is in PDF form, so it's not a fair comparison. I still find markdown a lot more approachable.
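For instance, here's the bit-extraction thing done explicitly (my own sketch, my own names); every ambiguity the pseudocode leaves to prose becomes a line of code or a comment:

```python
def extract_bits(value: int, lo: int, hi: int) -> int:
    """Return bits lo..hi of value, inclusive, where bit 0 is the LSB."""
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)

# bits 2..4 of 0b110100 are 0b101 == 5 -- no paragraph of prose needed
assert extract_bits(0b110100, 2, 4) == 0b101
```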
Immotommi@reddit
This is such a naive take. LaTeX is excellent, HTML is excellent, Markdown is excellent. They have different roles.
LaTeX is for formal typesetting and is unmatched in the class, but for casual purposes, it is unnecessary. Using markdown for proper typesetting is like writing an operating system in JavaScript.
The hyperref package in LaTeX makes links work both internally and externally, whether it's URLs opening in the browser or jumping to equations and sections from the references to them. In my thesis I also used backref in my bibliography, so that each entry links back to the pages where it's cited in the text.
Colour and font support is all there (as long as you aren't compiling with pdflatex). Vector graphics are supported in both pdf and svg formats.
In addition, the massive range of templates that have been created makes jumping into LaTeX much easier than it might otherwise be. There are some issues with it, make no mistake, but please don't just paint it as not good because it doesn't suit your use case.
catch_dot_dot_dot@reddit
Computer Science != programming. It's closer to maths. There are papers in the Software Engineering space, which do come closer to programming. There's space for all these things, I just don't think we should disregard or dilute the field of CS.
JanB1@reddit
One problem is that many/most papers are locked behind a (journal subscription) paywall, and those generally are prohibitively expensive. At least for me, that's the reason why I don't generally read papers. Same with standards which are locked behind a paywall. It's a really weird/broken system.
imachug@reddit
SciHub and libgen are very helpful here, FWIW.
JanB1@reddit
Both of which are not legal in a strict sense. So, if you're reading those papers for your job, you might get in trouble.
imachug@reddit
Copyright restricts reproducing works, not consuming them. Reading "stolen" papers is legal, ethics notwithstanding.
I never said that wasn't the case. But restricting your sources of information because of that sounds like an odd decision to me.
hornbygirl@reddit
this depends on jurisdiction - to my knowledge, consuming copyrighted works is legal in the US (not a lawyer), but that is absolutely not the case everywhere.
JanB1@reddit
Consuming copyrighted works would include downloading said works, no? I think that's not legal in some countries.
R1chterScale@reddit
iirc SciHub has a built-in reader, so an argument can be made regarding that
Otek0@reddit
It’s still being downloaded to your computer to render
EndiePosts@reddit
Don't @ me for this because it's not my legislation, but I believe that the DMCA would view downloading and viewing a copyrighted work as making a copy of it (on disk or in memory). Pretend I posted the "believe it or not, straight to jail" P&R meme at this point.
WarOnFlesh@reddit
in what place in the world is it illegal to read something that is stolen?
JanB1@reddit
Depends on what you need to do to read it. I think if you download copyrighted works to read them, it can be illegal.
qrrux@reddit
If you’re reading papers for your job, your employer should have no problem paying $20 for a paper.
JanB1@reddit
I think subscriptions for journals are a little more expensive than $20...
qrrux@reddit
The journal is. The paper often can be purchased as a one-off.
ilumsden@reddit
Thankfully, most CS subdisciplines are moving more and more towards open access. In fact, ACM is currently moving to a fully open-access model, and they plan to be done by the end of next year: https://www.acm.org/publications/openaccess#acmopen
hefty_habenero@reddit
One of my CS master's classes was operating systems; we read and reported on four seminal papers a week for the semester, in chronological order. Amazing to take the time to do that and get the detailed history. I don't remember a damned thing about them in particular, but I feel like it honed my intuition long term.
juhotuho10@reddit
Some papers do contain lots of hard data, complicated math, and up to hundreds of references. It mostly depends on who the research is written for and who wrote the paper.
imachug@reddit
Yep. That's what I usually read, because it's more relevant to what I'm currently doing than the opposite, but some people prefer the latter, so there we go.
dacjames@reddit
Why are we encouraging reliance on "Why Functional Programming Matters"? It provides little evidence for its claims, most of which have not proven out in practice in the 35 years since it was written.
Functional programming, especially lazy evaluation, has not been demonstrated to be easier to learn. The only study I've seen with hard data (sorry, this was many years ago during undergrad, I don't have a link) showed the opposite: procedural programming is easier to learn than functional programming. The paper says higher-order functions and lazy evaluation should be the primary vehicles for modularizing code but provides no evidence. They don't survey developers. They don't compare and contrast implementations between paradigms. They don't analyze code quality metrics. The only argument made is rhetorical, not scientific.
They encourage use of linked lists, which we now know are usually not the best data structure. Certainly not as shown in the paper. Lazy evaluation at the language level has come and mostly gone. It is still utilized in I/O contexts, but in languages with strict evaluation. Strict evaluation is easier to reason about, more efficient to implement, and it's easier to apply laziness selectively on top of strictness than the other way around.
I get that it's dated and research is expected to evolve over time. It is a product of a time when the "sufficiently smart compiler" was a real possibility rather than a holy grail. But we should contextualize it as such and emphasize that many of its claims have been refuted by time. FP has, in the aggregate, not mattered in the way argued. Its arguably biggest impact on PL design, immutability, is not even mentioned.
gitgood@reddit
This comment seems to be getting a lot of attention without much discussion, so I'd like to throw my hat in the ring.
Where in "Why functional programming matters" was this argument ever made? I've read it a few times over the years and can't remember it ever making the claim that functional programming (especially with laziness) is easier to learn. The central thesis is that modularity is essential to building successful software, and that functional programming with hof/laziness are good mechanisms to achieve this. Whether you agree with the thesis or not there's no claim as I see it for FP being more pedagogically suitable than procedural languages.
You can see how it's a bit intellectually dishonest to discredit a claim the paper never made by citing a paper you can't even remember.
I think this is an incredibly uncharitable and incorrect reading of the paper. Nowhere does Hughes use any language even vaguely as strong as "this should be the primary vehicle of modularizing code". The exact quote from the abstract is "...functional programming offers important advantages for software development."
This meaning that functional programming as described in the paper offers these advantages, but if you're unconvinced you're more than free to ignore it. There is no hard mandate here, and it's definitely nowhere as strong as your assertion.
I think this is actually a crucial flaw in your argument. If there were a strong assertion that this should be the way things are done, I'd 100% agree that the claim needs to be supported with much more than a rhetorical argument. But that's not what is happening; it's much more in the tone of "I think FP is beneficial for these reasons, let me show some examples". You're incredibly defensive over nothing, which is a reaction I see quite often towards FP from C/C++/Golang developers.
I'm not going to ramble any more because this is already long enough, but the two other things I'll touch on are:
He does, in a way, address immutability, just indirectly, in talking about referential transparency (the former enabling the latter). Miranda is purely functional by default; I suppose he thought immutability went without saying.
You seem to think that laziness as a concept has been completely relegated to the sands of time, never to be mentioned again after this paper was written. This couldn't be further from the truth. Plenty of strictly evaluated languages have incorporated lazy concepts, specifically around iterators/generators/streams, etc. This is left as an exercise to the reader.
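Okay, one freebie, a minimal Python sketch (strict language, lazy construct), which is exactly the kind of thing I mean:

```python
# Generators are lazy islands in a strict language: an "infinite list"
# is fine to define because elements are computed only on demand.
from itertools import count, islice

def squares():
    for n in count():   # conceptually infinite
        yield n * n     # evaluated only when someone asks

print(list(islice(squares(), 5)))  # [0, 1, 4, 9, 16]
```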
EndiePosts@reddit
I work with both Scala and Java, so I have either no dog or both dogs in this fight, but it is interesting that, to defend the paper, you mainly concede all the arguments of the comment you're responding to by saying, in effect, "yeah, but the paper never denies that..."
gitgood@reddit
I don't concede anything, I'm highlighting that the person I was replying to strawmanned an argument against a paper by misrepresenting it to the point that I don't believe he's even read it. It's not "the paper never denies that" as you've said, it's that the paper never claims that. There is a huge distinction here.
This isn't a FP vs procedural fight like you seem to think (by bringing up the languages you work with). It's someone being opinionated on a paper they haven't read (or read very poorly) versus someone that has read it.
Both you and the person I was originally replying to demonstrate very poor reading comprehension.
EndiePosts@reddit
Rich irony that you ad hominem freely and accuse everyone you disagree with of reading poorly or lying about reading, while not even comprehending that I mentioned the languages I work with purely to stop people like you from ad-homineming me by claiming I made a statement out of preference for one model or the other.
Eheheehhheeehh@reddit
What the fuck, you are insane
Anth77@reddit
Username checks out.
dr_wtf@reddit
I haven't read that paper, but I'll check it later. The important thing about functional programming - specifically when that is defined to mean that all functions in the main loop are pure and all data is immutable (there are many other definitions) - is that it's easier to reason about. That doesn't mean it's easier to learn, it means it's easier to audit program behaviour for correctness, given a reviewer who already fully understands the language.
The most important property of a pure functional program is referential transparency. If you have that, lazy evaluation is possible. But it can also just be left as an implementation detail, because lazy and eager evaluation are equivalent in the absence of side-effects (though real-world performance can differ a lot). Immutability is just another side-effect that happens to arise from the same property.
TheBanger@reddit
It's not true that "lazy and eager evaluation are equivalent in the absence of side-effects". All programs without side-effects that terminate when evaluated strictly will also terminate when evaluated lazily (or under some other form of non-strictness). But some programs that terminate when evaluated non-strictly will not terminate when evaluated strictly. For instance, `fst (1, undefined)` is guaranteed to return `1` in Haskell, but it would force `undefined` in a strict language.
This isn't just some academic distinction; it matters on a day-to-day level when writing code. It's extremely common in Haskell to use infinite lists and other expressions that can be partially evaluated but would cause the program to hang if evaluated strictly. Using Java at work, I regularly find situations where I can't express something quite as cleanly because of strict evaluation.
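A rough Python analogue for anyone who doesn't read Haskell (Python being strict, the pair's second component is evaluated no matter what):

```python
# Strict evaluation forces both components when the pair is built,
# so fst never gets the chance to discard the bad one.
def fst(pair):
    return pair[0]

def undefined():
    raise RuntimeError("bottom")

try:
    fst((1, undefined()))  # the pair is built strictly...
except RuntimeError:
    print("boom: undefined was forced before fst ever ran")
```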
dr_wtf@reddit
That's a fair point; I was glossing over infinite sequences a bit under the heading of "real-world performance can differ a lot", since running out of RAM by trying to eagerly evaluate infinity is generally not considered optimally performant.
jandrese@reddit
My impression is that functional programming is easier to learn if you are coming into programming from a mathematical background, which many academics are.
agumonkey@reddit
FP should be taken with care (saying this as a "fan"), especially "intermediate" FP (70s-era idioms and structures), because it's often not the best answer for high performance, and as a learner it's easy to get stuck before understanding this.
Back to laziness: the fact that it doesn't enforce order improves modularity in a way; the logical relationships create the structure, not the position in the code or the time it's executed. My 2 cts.
avinassh@reddit
No shade to OP, but this article looks like someone just googled a list of papers in some domain and made a listicle. I bet even OP hasn't read those papers, because none of the sections explain why a paper matters or provide worthwhile insight.
emurange205@reddit
On page 2:
mysticreddit@reddit
Missing IMHO:
Tubthumper8@reddit
Are these computer science papers?
mysticreddit@reddit
The first is, the last two aren't.
What defines a canonical computer science paper? Something that is published?
Who decides what is a canonical computer science paper?
Not every whitepaper is about theory. Concepts and applied knowledge are JUST as important.
IMHO half of the papers in the OP's list are outdated and next to useless:
"What Every Programmer Should Know About Memory" should have been listed first, IMHO.
Where are these classic whitepapers?
IMHO even Quora answers are far better than this list.
TL;DR:
Actually, scratch that, start here: List of important publications in computer science
Links removed due to reddit censorship.
catch_dot_dot_dot@reddit
I think this shows you how often people read papers 😂
k1v1uq@reddit
My go-to FP arithmetic explainer video: https://youtu.be/zzSmQvMOrm4
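Assuming it covers the usual IEEE 754 territory, the classic two-liner that motivates the whole topic:

```python
# Binary floating point cannot represent 0.1 exactly, hence:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```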
aholmes0@reddit
Add addresses and email
mysticreddit@reddit
Oh nice! I haven't seen those before. Thanks!
frud@reddit
A Pixel Is Not A Little Square, A Pixel Is Not A Little Square, A Pixel Is Not A Little Square! (And a Voxel is Not a Little Cube)
lolwutpear@reddit
I feel like that one didn't age as well, given our transition to digital camera sensors and LCD/LED-type displays. Obviously still super appropriate to computer graphics. There really is a little photodiode and some transistors packed into a little square, and you can collectively refer to them as a pixel, even if he might say that all they're doing is sampling some light in a plane of space.
aholmes0@reddit
Given this was written in 1995, when CRTs were common, I wonder whether it's still true. LED and LCD screens are physically little squares, not projected waves.
aePrime@reddit
As a professional graphics programmer: yes, the concepts are still correct. You're representing a continuous function with discrete samples. In an ideal world, you can reconstruct the original continuous function with enough discrete samples (the Nyquist limit is one name for this: you have to sample at twice the rate of the highest frequency). Unfortunately, we can't quite get there because graphics usually have many discontinuities, but the idea is still essential for anti-aliasing.
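In symbols, since it's compact (the standard statement, added for reference):

```latex
% Shannon-Nyquist condition: exact reconstruction of a band-limited
% signal with highest frequency f_max requires a sampling rate f_s with
f_s > 2 f_{\max}
```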
Fun note: the term anti-aliasing comes from signal processing, where if you don’t have enough samples, your signal looks like a different signal (aliases).
aholmes0@reddit
Thanks for the info. I remember from working in print advertising that vector graphics were very important for what you're describing. It matters for digital rendering and for transferring to print media. But when we're referring to something that is physically a bunch of squares, treating a pixel as such seems like an accurate model. It means we end up with anti-aliasing algorithms that color pixels as averages of their neighbors. So I wonder, are you drawing a distinction between representing and storing graphics information versus rendering?
aePrime@reddit
I will first admit that I am not an expert on display technologies. I'm an expert at making little dots of various colors.
I will approach this as a rendering engineer: we store our final data in images. Images have no concept of an area at the pixel level (no matter what zooming in in Photoshop likes to tell you with blocky representations). Images are an array of infinitesimally small sample points spread over a continuous function. We have to filter to accurately represent the continuous data in the sample points (pixels).
At display time, a monitor, or whatever display device is used, has to take these samples and try to reconstruct something that looks continuous, even though display devices aren't continuous either. As I said, I'm not an expert in this domain, and it will vary from device to device, but they have to filter, too, to make this leap. If they took the pixels and mapped them straight to rectangles, you would get jaggies, as there are sudden leaps in values. You may say that we can ignore these jaggies if the rectangles are small enough, but even that assumes a one-to-one mapping of pixels to display elements. Everything is scaled, and the scaling requires filtering.
We're sampling continuous data: a signal. To reconstruct this continuous data, we have to use the Fourier transform. A box filter (a simple mean over an area) becomes the sinc function under the Fourier transform. If you apply it again, it becomes a box again. Ignoring discontinuities, we should filter each sample with the sinc function to represent the continuous space accurately. However, this isn't what happens in practice, because the sinc function has infinite extent, so we in the graphics community generally use filters that "look" like sinc.
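For reference, that box/sinc duality in the standard convention (textbook form, nothing graphics-specific):

```latex
% The Fourier transform of the unit box filter is the sinc function:
\mathcal{F}\{\operatorname{rect}(x)\}(\xi)
  = \operatorname{sinc}(\xi)
  = \frac{\sin(\pi \xi)}{\pi \xi}
```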
aholmes0@reddit
Unrelated to this thread, I think it's cool that square waves are built from sinusoids. I didn't realize this until looking at them with sufficient resolution.
Phrodo_00@reddit
A lot of LCD and OLED displays (and cameras for that matter) don't really have perfect pixel geometry where all of them have the same amount of red, green and blue in a consistent order. A lot of them are PenTile or similar, which causes a similar gotcha.
PurepointDog@reddit
They're smaller these days though, so less effect
Benabik@reddit
It's more about generating pixels from images, not the display of pixels. There is a CRT-dependent section near the end, but somewhat similar notes can probably be made about LCDs and things like sub-pixel rendering.
aholmes0@reddit
Gotcha, thanks. I suspected this.
Mysterious_Panorama@reddit
Or if you’re into longer form, more general audience work, Alvy’s book A Biography of the Pixel.
k1v1uq@reddit
Everything regarding the “Expression Problem”
in particular “Object Algebra” if you are an OO programmer
https://www.reddit.com/r/scala/comments/1hsokm1/from_object_algebras_to_finally_tagless/
agumonkey@reddit
I love this, but have you ever worked on a team following this paradigm?
k1v1uq@reddit
yes, tagless final is pretty common in functional programming.
And Object Algebra is essentially the OO equivalent of tagless final.
Combine it with Akka and you've got yourself a solid foundation for scaling an application in all directions without breaking changes.
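If you want the flavor without reading the link, here's a minimal Python sketch of the object-algebra shape (the names are mine; real uses are statically typed, e.g. in Scala or Java):

```python
# Expressions are written against an abstract "algebra" of constructors,
# so adding a new interpretation means writing a new algebra class --
# no edits to existing expressions (one half of the expression problem).
class Eval:
    def lit(self, n): return n
    def add(self, a, b): return a + b

class Show:
    def lit(self, n): return str(n)
    def add(self, a, b): return f"({a} + {b})"

def expr(alg):
    # 1 + (2 + 3), built through whatever algebra is supplied
    return alg.add(alg.lit(1), alg.add(alg.lit(2), alg.lit(3)))

print(expr(Eval()))  # 6
print(expr(Show()))  # (1 + (2 + 3))
```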
agumonkey@reddit
interesting, gonna read about it
Scavenger53@reddit
FTFY
Putrid_Wishbone5203@reddit
Until this very moment I always thought FTFY meant Fuck Them and Fuck You.
HyperionSunset@reddit
Congratulations on becoming one of today's 10,000: https://xkcd.com/1053/
chicknfly@reddit
I have always enjoyed XKCD, but I love this particular one.
And the Don Quixote philosophical one FWIW
Bobzer@reddit
FTFY
agumonkey@reddit
fuck that fuck yeah
faitswulff@reddit
To be honest, I like yours better.
RovingN0mad@reddit
To be fair reading is hard
srpulga@reddit
found the reader guys!
__Maximum__@reddit
You have no idea how much crap I read on reddit every day
Piisthree@reddit
Upvoted this without reading. Someone mind giving me the gist?
Aceofsquares_orig@reddit
A bunch of words and characters. Some mention of computer science.
izhy@reddit
Do you have some sites where I can find more articles like that? (except Google Scholar and other scientific-article aggregators)
mysticreddit@reddit
List of important publications in computer science
chesterriley@reddit
The best books I've read are "Software Tools" and "The UNIX Programming Environment". But stuff like "The Mythical Man-Month" and "The Cathedral and the Bazaar" are important too. The OP's list is not something "every developer" needs to read. I kind of got bored just reading the titles.
McUsrII@reddit
I think the "Parnas paper" is something at least every developer should read.
I'll add "Philsopy of Sofwtare design" to your "Best books list", right after or in parallel with "Software tools".
mysticreddit@reddit
Agreed.
lordnikkon@reddit
It is funny that in this sub people actually want to learn computer science topics, while in /r/cscareerquestions I mentioned that it's a good idea for everyone to learn CS fundamentals, because I see too many people in industry who know nothing about them, and multiple people replied to me that that stuff is not useful to learn.
ml01@reddit
Are they also available in .epub? PDFs are always clunky on my ebook reader.
LexaAstarof@reddit
I am not an AI shill (far from it!), but the Attention Is All You Need paper should be up there
ewankenobi@reddit
Obviously it's a seminal paper in machine learning, as it gave us transformers, but I don't think it's a paper that explains the concepts involved very well, and if reading that paper was all you were going to do to learn about LLMs, I don't think you'd finish it much the wiser. In fact, I'd say that in general, academic papers are good for getting a deeper understanding, or the latest knowledge, on a subject you already know about. But if you are starting from scratch, books and YouTube videos are better ways to learn the initial concepts.
hacksawjim@reddit
It's the first paper mentioned in the article.
LexaAstarof@reddit
Ah, didn't see it. I was skimming through the bloat and only reading the list itself...
motsu35@reddit
Not a research paper... I mostly code in Python nowadays, but I used to be a huge C++ dev (and still do the occasional C thing for embedded). Anyway, I read this a while ago, and it made me go from the mentality of "Python is a fine scripting language for things bigger than a bash file / auxiliary services" to "holy crap, Python is such a well-structured language that is infinitely hackable". I basically send it to any mid-level dev who's also working in Python, if they're looking to push their knowledge boundaries (as well as pointing them at system architecture stuff, but that's more specific to what they're working on/at).
https://sadh.life/post/builtins/
Raknarg@reddit
I'm a CS grad and I think I would rather shoot myself and splatter my brains all over my terminal than read a paper.
emotionalfescue@reddit
"The Transaction Concept: Virtues and Limitations" (and a bunch of other papers) by Jim Gray. Not only did he do a lot of the early work on the subject with his peers at IBM, but he explained it better than anyone has since, much better than the tech bros on YouTube.
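For anyone who hasn't read it, the core idea demos in a few lines (a toy sketch using Python's built-in sqlite3, nothing to do with Gray's own systems):

```python
# Atomicity, the heart of the transaction concept: two updates
# succeed or fail together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 150 "
                     "WHERE name = 'bob'")
        (bal,) = conn.execute("SELECT balance FROM accounts "
                              "WHERE name = 'alice'").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")  # aborts the transfer
except ValueError:
    pass

# Neither update stuck -- all-or-nothing:
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 0)]
```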
Warshrimp@reddit
End-to-end arguments in system design
Warshrimp@reddit
A method for obtaining digital signatures and public-key cryptosystems
Warshrimp@reddit
A mathematical theory of communication
Warshrimp@reddit
Impossibility of distributed consensus with one faulty process
Warshrimp@reddit
Go To Statement Considered Harmful
Warshrimp@reddit
Reflections on Trusting Trust
curveThroughPoints@reddit
There’s a book that contains the essential ones.
flowering_sun_star@reddit
I think the premise that every developer should read CS papers is a flawed one. The thing is, many of these are academic papers, written for an academic audience. And most developers aren't academics. I know that when it comes to physics, an undergraduate degree doesn't really equip you to properly read papers. You can make a start at trying, and review articles are usually more accessible, but papers are written with the assumption that the reader is someone with a background in the subfield. You develop that over the course of a masters and PhD. Is CS any different? And many developers don't even have an undergraduate CS background.
I know that I, without that CS background, have a great deal of trouble making any sense of Lamport's paper. And what sense I do glean is largely because I've made use of what a colleague called a Lamport Clock, and I can sort of see how you get from the bits I did understand to that real implementation.
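For what it's worth, the mechanism itself turns out to be tiny; here's a minimal sketch of a Lamport clock (my naming, not the paper's notation):

```python
class LamportClock:
    """Logical clock from Lamport's "Time, Clocks..." paper."""
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event: advance the clock."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """On receipt, jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()     # a.time == 1
b.receive(t)     # b.time == 2: the receive is ordered after the send
```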
On the other hand Waldo et al's note on distributed computing (a subject I know a thing or two about) is all understandable, but quite low quality as a paper IMO. There is a sensible point there - namely that a distributed system behaves in fundamentally different ways than a fully local one, and you need to design it from the beginning to account for those differences. But it's hidden behind an awful lot of waffle!
I will take a look at a couple of the others, but will I gain anything from them or will I simply pull out the bits I recognise from my experience and glaze over the rest? Time will tell.