“IT people” here, operations guy who keeps the lights on for that software.
It’s been my experience that developers have no idea how the hardware works, but STRONGLY believe they know more than me.
Devops is also usually more dev than ops, and it shows in the availability numbers.
Yup. Programmers who have only ever been programmers tend to act like god’s gift to this world.
As a QA person I despise those programmers.
“Sure buddy it works on your machine, but your machine isn’t going to be put into the production environment. It’s not working on mine, fix it!”
Thus, Docker was born.
“Works on my machine, ship the machine.”
My answer is usually “I don’t care how well it runs on your windows machine. Our deployments are on Linux”.
I’m an old developer that has done a lot of admin over the years out of necessity.
As a developer I can freely admit that without the operations people the software I develop would not run anywhere but on my laptop.
I know as much about hardware as a cook knows about his stove and the plates the food is served on – more than the average person but waaaay less than the people producing and maintaining them.
As a devops manager that’s been both, it depends on the group. Ideally a devops group has a few former devs and a few former systems guys.
Honestly, the best devops teams have at least one guy that’s a liaison with IT who is primarily a systems guy but reports to both systems and devops. Why?
It gets you priority IT tickets and access while systems trusts him to do it right. He’s like the crux of every good devops team. He’s an IT hire paid for by the devops team budget as an offering in exchange for priority tickets.
But in general, you’re absolutely right.
Am I the only guy that likes doing devops and has both dev and ops experience and insight? What’s with siloing oneself?
I’ve done both, it’s just a rarity to have someone experienced enough in both to be able to cross the lines.
Those are your gems and they’ll stick around as long as you pay them decently.
Hard to find.
Because the problem is that you need
- A developer
- A systems guy
- A sociable and great personality
The job is hard to hire for because those 3 in combo is rare. Many developers and systems guys have prickly personalities or specialise in their favourite part of it.
Devops people don’t have the option of a prickly personality, because you have to deal with so many people outside your team who are prickly, and you sometimes have to give them bad news….
Eventually they’ll all be mad at you for SOMETHING… and you have to let it slide. You have to take their anger and not take it personally… That’s hard for most people, let alone tech workers who grew up idolising Linus Torvalds or Sheldon Cooper and their “I’m so smart that I don’t need to be nice” attitudes.
Fantastic summary. For anyone wondering how to get really really valuable in IT, this is a great write-up of why my top paid people are my top paid people.
I’ve always found this weird. I think to be a good software developer it helps to know what’s happening under the hood when you take an action. It certainly helps when you want to optimize memory access for speed etc.
I genuinely do know both sides of the coin. But I do know that the majority of my fellow developers at work most certainly have no clue about how computers work under the hood, or networking for example.
I find it weird because being good at software development (and I don’t mean following what computer science methodology tells you; I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language, and most importantly how to optimize that solution, for example in terms of memory access) requires an understanding of the underlying systems. And if you write software that sends or receives network packets, it certainly helps to understand how that works, at least to consider the best protocols to use.
But, it is definitely true.
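As a toy illustration of the protocol-choice point above: the sketch below (Python over the loopback interface, purely hypothetical) shows how little ceremony a UDP exchange involves compared to TCP's handshake and byte-stream semantics, which is exactly the kind of trade-off a dev benefits from understanding.

```python
import socket

# A single-datagram UDP exchange over loopback. No handshake, no
# connection state, no delivery guarantee: each sendto() is one
# self-contained packet. TCP, by contrast, costs a round trip to set
# up but gives you an ordered, reliable byte stream.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)         # fire-and-forget datagram
data, _ = server.recvfrom(1024)      # message boundaries are preserved

client.close()
server.close()
```

Whether that fire-and-forget behaviour is acceptable (telemetry, game state) or disqualifying (file transfer, billing) is precisely the protocol decision in question.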
I think to be a good software developer it helps to know what’s happening under the hood when you take an action.
There’s so many layers of abstractions that it becomes impossible to know everything.
Years ago, I dedicated a lot of time to understanding how bytes travel from a server into your router into your computer. Very low-level mastery.
That education is now trivia, because cloud servers, Cloudflare, region points, edge servers, company firewalls… all are barriers that add more and more layers of complexity that I don’t have direct access to but that can affect the applications I build. And it continues to grow.
Add this to the pile of updates to computer languages, new design patterns to learn, operating system and environment updates…
This is why engineers live alone on a farm after they burn out.
It’s not feasible to understand everything under the hood anymore. What’s under the hood grows faster than you can pick it up.
I’d agree that there’s a lot more abstraction involved today. But, my main point isn’t that people should know everything. But knowing the base understanding of how perhaps even a basic microcontroller works would be helpful.
Where I work, people often come to me with weird problems, and the way I solve them is usually based on a low-level understanding of what’s really happening when the code runs.
One may also end up developing in the areas that the above post considers inaccessible where their knowledge is likely still required.
yeah i wish it was a requirement that you’re nerdy enough to build your own computer or at least be able to install an OS before joining SWE industry. the non-nerds are too political and can’t figure out basic shit.
This is like saying that before you can be a writer, you need to understand Latin and the history of language.
Before you can be a writer, you need to sharpen your own pencil.
I do like the idea of informing yourself a bit more about the note-taking app you’re writing with. It sounds kind of obvious, but it can have many advantages.
Personally though, I don’t really see the upside of building a computer, as you could also just research things without building one, or vice versa. (Maybe it’s good for looking at bug reports?)
A 30 minute explanation on how CPUs work that I recently got to listen in on was likely more impactful on my C/assembly programming than building my own computer was.
you wouldn’t want somebody that hates animals to become a veterinarian just because of money-lust. the animals would suffer, the field as a whole, too. maybe they start buying up veterinary offices and squeeze the business for everything they can, resulting in worse outcomes- more animals dying and suffering, workers get shorted on benefits and pay.
people chasing money ruin things. we want an industry full of people that want to actually build things.
I don’t really see the connection to my comment.
In this example wouldn’t the programmer be more of a pharmacist? (The animal body the computer and its brain the user?)
Your statement is not wrong, it just seems unrelated.
You should if you want to be a science writer or academic, which, let’s be honest, is a better comparison here. If your job involves Latin for names and descriptions, then you probably should take at least a year or two of Latin if you don’t want to make mistakes here and there out of ignorance.
weird, i studied latin and the history of language just because i found it interesting. i am always seeking to improve my writing skills tho.
I think software was a lot easier to visualise in the past when we had fewer resources.
Stuff like memory becomes almost meaningless when you never really have to worry about it. 64,000 bytes was an amount that made sense to people. You could imagine chunks of it. 64 billion bytes is a nonsense number that people can’t even imagine.
When I was talking about memory, I was thinking more about how it is accessed. For example, exactly which actions are atomic and which are not on a given architecture; these can cause unexpected interactions during multi-core work, depending on byte alignment for example. Also, considering how to make the most of your CPU cache. These kinds of things.
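To make the atomicity point concrete: even a plain increment is a read-modify-write, not one indivisible action. A minimal CPython sketch (the hardware story is about machine instructions, cache lines, and memory ordering, but the decomposition is the same idea; `bump` and `counter` are invented names for illustration):

```python
import dis
import threading

counter = 0
lock = threading.Lock()

def bump():
    global counter
    counter += 1          # looks like one step; it is not

def safe_bump():
    global counter
    with lock:            # the lock makes the read-modify-write indivisible
        counter += 1

# The increment disassembles into separate load / add / store steps;
# the scheduler can interleave another thread between any two of them,
# which is how concurrent unlocked increments lose updates.
steps = [ins.opname for ins in dis.get_instructions(bump)]
```

On real hardware the analogous questions are which loads and stores the ISA guarantees to be atomic, and whether two hot variables happen to share a cache line (false sharing).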
and I don’t mean, following what the computer science methodology tells you, I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language,
But that IS computer science.
It very much depends on how close to hardware they are.
If someone is working with C# or JavaScript, they are about as knowledgeable about hardware as someone working in Excel (I know this statement is tantamount to treason, but as far as hardware is concerned it’s true).
But if you are working with C or Rust or, god forbid, drivers, you probably know more than the average IT professional; you might even have helped correct hardware issues.
Long story short it depends.
Absolutely agree, as a developer.
The devops team set up a pretty effective devops pipeline that allows us to scale infinitely. Which would be great if we had infinite resources.
We’re hitting situations where the solution is to throw more hardware at it.
And IT cannot provision tech fast enough within budget for any of this. So devs are absolutely suffering right now.
Maybe you should consider a bigger budget?
I know this is not everyone and there are some unicorns out there, but after working with hiring managers for decades I can’t help but see cheap programmers when I see Devops. It’s either ops people who think they are programmers, or programmers who aren’t good enough to get hired as Software Engineers outright at higher pay. When one person is both, they can do both, but they’re not great at either one. Devops works best when it’s a team of both dev and ops people working together. IMO
20 year IT guy and I second this. Developers tend to be more troublesome than the manager wanting a shared folder for their team.
Rough and that sucks for your organization.
Our IT team would rather sit in a room with developers and solve those problems, than deal with hundreds of non-techs who struggle to add a chrome extension or make their printer icon show up.
I would love to work through issues, but the stock of developers we currently have seem to either be rejects or have, as someone else stated, “a god complex”. They remind me of pilots in the military. All in all it is a loss for my organization.
I work on a team that’s mainly infrastructure and operations. As one of the only people writing code on the team, I have to appreciate what IT support does to keep everything moving. I don’t know why so many programmers have to get a chip on their shoulder.
As a developer I like to mess with everything. Currently we are doing an infrastructure migration and I had to do a lot of non-development stuff to make it happen.
Honestly, I find it really useful (but not necessary) to have some understanding of the underlying processes of the code I’m working with.
Could you give an example of something related to hardware that most developers don’t know about?
Simple example, our NASes are EMC2. The devs over at the company that does the software say they’re garbage, we should change them.
Mind you, these things have been running for 10 years straight 24/7, under load most of the time, and we’ve only swapped like 2 drives, total… but no, they’re garbage 🤦…
Accurate!
Developers are frequently excited by the next hot thing or how some billionaire tech companies operate.
I’m guilty of seeing something that was last updated in 2019 and having a look of disgust.
Well, at least you admit it, not everyone does.
I do agree that they’re out of date, but that wasn’t their point, their software somehow doesn’t like the NASes, so they had to look into where the problem was. But, their first thought was “let’s tell them they’re no good and tell them which ones to buy so we wouldn’t have to look at the code”.
That sounds extremely lazy. I’d expect more from a dev team.
Me too, but apparently, that wasn’t the case.
My reasoning was, they’d have to send someone over to do tests and build the project on site, install and test, since we couldn’t give any of those NASes to them for them to work on the problem, and they’d rather not do that, since it’s a lot more work and it’s time consuming.
Couldn’t they remotely connect to them?
If it’s publicly accessible it likely has a bunch of vulnerabilities so I too understand that look.
Sorry, this comment is causing me mental whiplash so I am either ignorant, am subject to non-standard circumstances, or both.
My personal experience is that developers (the decent ones at least) know hardware better than IT people. But maybe we mean different things by “hardware”?
You see, I work as a game dev so a good chunk of the technical part of my job is thinking about things like memory layout, cache locality, memory access patterns, branch predictor behavior, cache lines, false sharing, and so on and so forth. I know very little about hardware, and yet all of the above are things I need to keep in mind and consider and know to at least some usable extent to do my job.
While IT are mostly concerned with keeping the idiots from shooting the company in the foot, by rolling out software that lets them diagnose, reset, install, or uninstall things across entire fleets of computers at once. It also just so happens that this software is often buggy and eats 99% of your CPU in spin loops (they had to roll that back, of course), or the antivirus rules don’t apply on your system for whatever reason, causing the antivirus to scan all the object files generated by the compiler even when they land in a whitelisted directory, making a rebuild take an hour rather than 10 minutes.
They are also the ones that force me to change my (already unique and internal) password every few months for “security”.
So yeah, when you say that developers often have no idea how the hardware works, the chief questions that come to mind are
- What kinda dev doesn’t know how hardware works to at least a usable extent?
- What kinda hardware are we talking about?
- What kinda hardware would an IT person need to know about? Network gear?
When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview, in my experience. Stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality: it may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media is when it dies. Suddenly, once the program is in production, it turns out it’s making a bunch of random I/O calls that could have been optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than into business software devs, especially in-house ones. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else (performance, security, cleanliness, resource optimization) is given bare lip service at best.
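The “dump the database into memory” failure mode has a cheap fix that a small sketch shows well: stream rows in bounded chunks instead of materialising the whole table. Python’s sqlite3 stands in for the real database here, and the `events` table and its columns are invented for the example.

```python
import sqlite3

def stream_rows(conn, chunk_size=10_000):
    """Yield rows one at a time while holding at most chunk_size in RAM."""
    cur = conn.execute("SELECT id, payload FROM events ORDER BY id")
    while True:
        rows = cur.fetchmany(chunk_size)   # bounded buffer, not fetchall()
        if not rows:
            break
        yield from rows

# Demo setup with a small in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"event-{i}",) for i in range(25)])
total = sum(1 for _ in stream_rows(conn, chunk_size=10))
```

The memory footprint stays flat whether the table holds 500 MB or 500 GB; only the chunk buffer lives in RAM at any moment.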
Thank you for the explanation, now I understand the context on the original message. It’s definitely an entirely different environment, especially the kind of software that runs on a bunch of servers.
I have built business programs before being a game dev, still the kind that runs on-device rather than on a server. Even then, I always strived to write the most correct and performant code. Of course, I still wrote bugs, like the time a release broke the app for a subset of users because one of the database migrations didn’t apply to some real-world use case. Unfortunately, that one was due to us not having access to real-world databases or good enough surrogates, due to customer policy (we were writing a unification tool of sorts; up until this project every customer could give different meanings to each database column, as they were just freeform text fields. Some customers even changed the schema). The migrations ran perfectly on each of the test databases that we did have access to, but even then I did the obvious: rolled the release back, added another test database that replicated the failing real-world use case, fixed the failing migrations, and re-released.
So yeah, from your post it sounds that either the company is bad at hiring, bad at teaching new hires, or simply has the culture of “lol who cares someone else will fix it”. You should probably talk to management. It probably won’t do anything in the majority of cases, but it’s the only way change can actually happen.
Try to schedule a one-on-one session with your manager every 2 to 3 weeks to assess which systematic errors in the company are causing issues. 30-minute sessions, just to make them aware of which parts of the company need fixing.
Game development is a very specific use case, and NOT what most people think of when talking about devs vs ops.
I’m talking enterprise software and SaaS companies, which would be a MUCH larger part of the tech industry than games.
There are a large number of devs who think public cloud as infrastructure is ALWAYS the right choice for cost and availability, for example… which in my experience is actually backwards, because legacy software and bad developers fail to understand the limitations of these platforms, that they’re untrustworthy by design, and outages ensue.
In these scenarios, understanding how the code interacts with actual hardware (network, server, and storage, or their IaaS counterparts) is like black magic to most devs… They don’t get why their designs are going to fall over and sink into the swamp because of their naivete. It works fine on their laptop, but when you deploy to prod and let customer traffic in, it becomes a smoking hole.
Apologies for the tangent:
I know we’re just having fun, but in the future consider adding the word “some” to statements about groups. It’s just one word, but it adds a lot of nuance and doesn’t make the joke less funny.
That 90’s brand of humor of “X group does Y” has led many in my generation to think in absolutes and to get polarized as a result. I’d really appreciate your help to work against that for future generations.
Totally optional. Thank you
In my experience a lot of IT people are unaware of anything outside of their bubble. It’s a big problem in a lot of technical industries with people who went to school to learn a trade.
The thing about the bubble that IT insulates itself into is that if you don’t maintain it, users will never submit tickets and will keep coming to you personally, then get mad when their high-priority concern sits for a few days because you were out of office, while the rest of the team got no tickets because the user decided they were better than a ticket.
If people only know how to summon you through the ancient ritual of ticket opening with sufficient information they’ll follow that ritual religiously to summon you when needed. If they know “oh just hit up Rob on teams, he’ll get you sorted” the mysticism is ruined and order is lost
Honestly, I say all this partially jokingly. We do try to insulate ourselves because we know some users will try to bypass tickets given any opportunity, but there is very much value in balancing that need with accessibility and visibility. The safe option is to hide in your basement office and avoid mingling, but that’s also the option that limits your ability to improve yourself and your organization.
I meant knowledge-wise. Many people in technical industries lack the ability to diagnose issues because they don’t have a true understanding of what they actually do. They follow diagnostic trees or subscribe to what I call “the rain dance” method, where they know something fixes a problem but they don’t really know why. If you mention anything outside of their small reality, they will refuse to acknowledge its existence.
More like:
“IT people when software people talk about their requirements.”
No, we won’t whitelist your entire program folder in Endpoint Protection.
Yep, unrealistic expectations.
Or “you need a 12th gen i7 to run this thing”… the thing is a glorified Avidemux.
Christ, if you could see the abysmal efficiency of business tier SQL code being churned out in the Lowest Bidder mines overseas…
Using a few terabytes of memory and a stack of processors as high as my knee so they can recreate Excel in a badly rendered .aspx page built in 2003.
We have a table with literally three columns. One is an id, another a filename and a third a path. Guess which one was picked as the primary key?
Never seen something so stupid in 28 years of computing. Including my studies.
Well, you don’t want to waste space by adding the same file path twice
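For the curious, a sketch of what that table arguably should look like: a surrogate integer key, with uniqueness enforced separately. The column names are assumed from the description above, and sqlite3 stands in for whatever database is actually involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE files (
        id       INTEGER PRIMARY KEY,    -- cheap, stable surrogate key
        filename TEXT NOT NULL,
        path     TEXT NOT NULL,
        UNIQUE (path, filename)          -- still no duplicate entries
    )
""")
conn.execute("INSERT INTO files (filename, path) VALUES (?, ?)",
             ("report.pdf", "/srv/docs"))
# Moving the file now touches one row; the key itself never changes.
conn.execute("UPDATE files SET path = ? WHERE id = 1", ("/srv/archive",))
```

Keying on the path (or filename) means every rename rewrites the primary key, along with anything that references it.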
As a dev, I had to fix an O( n! ) algorithm once because the outsourced developer that wrote it had no clue about anything. This algorithm was making database queries. To an on-device database, granted, so no network requests, but jesus christ man. I questioned the sanity of the world that time, and haven’t stopped since.
Oh yeah, I love people who stick SQL lookups in a For Loop. Even better, the coder who puts conditional if (but no then/else) clauses around a dozen raw text execution commands that fire in sequence. So you’re making six distinct lookups per iteration rather than answering your question in a single query and referencing the results from memory.
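A minimal sketch of the difference (sqlite3 standing in for the real database; the `items` table and its columns are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1, 101)])

def prices_loop(conn, ids):
    # The anti-pattern: one round trip to the database per id.
    return {i: conn.execute("SELECT price FROM items WHERE id = ?",
                            (i,)).fetchone()[0]
            for i in ids}

def prices_single_query(conn, ids):
    # One query answers the whole question; reference results from memory.
    marks = ",".join("?" * len(ids))
    return dict(conn.execute(
        f"SELECT id, price FROM items WHERE id IN ({marks})", list(ids)))
```

Against a networked database, each iteration of the loop pays query latency again; the single query pays it once, and the server gets to plan one lookup instead of N.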
Internal screaming
No, we can’t get gigabit fiber everywhere. No, I don’t care if your program needs it. Yes, the laws of physics are laws for a reason. Write more robust code.
Write more robust code.
Sure, I could read a book about best practices and Big O…but…What if we just table the idea for a few iterations of Moore’s Law instead?
and Big O
It’s asymptotic. Slower O doesn’t mean faster program.
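A concrete toy example of that point: on a small, nearly sorted input, O(n²) insertion sort does fewer comparisons than O(n log n) merge sort, because asymptotic bounds only describe behaviour as n grows, not the constants at small n.

```python
def insertion_sort(items):
    """O(n^2) worst case, but close to n comparisons on nearly sorted input."""
    a, comps = list(items), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comps

def merge_sort(items):
    """O(n log n) regardless of input order, constants and all."""
    comps = 0
    def merge(left, right):
        nonlocal comps
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]
    def sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        return merge(sort(a[:mid]), sort(a[mid:]))
    return sort(list(items)), comps

nearly_sorted = [1, 2, 3, 4, 5, 7, 6, 8]
_, ins_comps = insertion_sort(nearly_sorted)   # 8 comparisons
_, mrg_comps = merge_sort(nearly_sorted)       # 13 comparisons
# The asymptotically "slower" algorithm wins here.
```

This is why real standard libraries use hybrids (e.g. switching to insertion sort below a size cutoff) rather than the algorithm with the best Big O.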
When I say Big O, I’m talking about the slick jazzy anime about rejecting true love and living with heartbreak because we believe a lie about our own superiority. This is always true, no matter what the discussion context. If I happen to say anything remotely relevant to mathematical Big O, that is just a deeply weird coincidence.
Oh! You meant Moore’s Law is asymptotic!?
Yes! That is key to the joke I was making.
Gigabit fiber? You’re in some posh spot but need to downgrade for some reason, right?
As a software person i have to protest at being called out like this. It’s the fucking weekend man…stop picking on me for just one damn day.
Ouch yeah that windows endpoint stuff is really rattling though. I get you just can’t whitelist some folder without compromising security, but when the “eNdPoInt pRoTeCtIon” just removes dlls and exes you are compiling (and makes your PC crawl) you really hate that shit.
Right click? 40 seconds plz (maybe any of the possible contextual right clicks might be a virus, so let’s just check them all once again).
At home I have an old linux pc, and it blows those corpo super pcs out the window.
Rant off :-D
Ah yeah, IT people are chill, always be cool with them is also a good idea, not their fault all this crap exists.
Hahaha! We’ve an “architect” who insists he needs to be the owner on the GitLab. My colleague has been telling him to fuck off for the entire week. It reached the point that the fool actually complained to our common boss… The guy is so used to working at a start-up and has no fucking clue about proper procedures. It’s terrifying that he could be in charge of anything, really.
I started getting messages every week from a Carbon Black scan blocking access to some npm package’s package.json.
IT just white listed files named package.json.
In a rapidly churning startup phase, where new releases can and do come out constantly to meet production requirements, this one size fits all mentality is impractical.
If you refuse to whitelist the deployment directory, you will be taking 2am calls to whitelist the emergency releases.
No it can’t wait until Monday at 9am, no there will not be a staged roll out and multiple rounds of testing.
I am more than willing to have a chat; you, me and the CEO.
No it can’t wait until Monday at 9am, no there will not be a staged roll out and multiple rounds of testing.
I hope you’re doing internal product development. Otherwise, name and shame so I can stay the hell away from your product. This is a post-Crowdstrike world.
It IS bespoke internal development, not for deployment outside of the facility.
The computers running the software exist only to run this software and have no business talking to the internet at all.
IT is provided by an external third-party vendor who operates on an inflexible “best practices dogma”.
Sounds like you’re stuck in a worst practices mindset.
Sign your damn releases and have the whitelisting done by cert.
Sounds like you’re stuck in a worst practices mindset.
Worst/Pragmatic.
If I get a timeline for a feature request, then everything can be scheduled, tested, whitelisted, delivered at a reasonable time.
That’s the rarer event - normally it’s more like “the scale head has died and a technician is on the way to replace it” and whilst I modify the program in question to handle this new input, hundreds of staff are standing around and delivery quotas won’t be met.
Is my position arrogant? This is the job.
Sign your damn releases and have the whitelisting done by cert.
I’ll see if this is possible at the site in question, thank you.
In my experience it’s been IT people telling me you can’t use a certain tool or have more control over your computer cause of their rules.
The expression is appropriate, but the meme assumes that I’m doubting the IT person’s expertise. I’m not; I’m just not liking the rules that get in the way of my work. Some rules do make sense though.
Edit: just wanted to point out, yes I agree, you need the rules, they are still annoying tho.
“Their rules” are basic security precautions
Their rules have stopped me from being able to do my job. Like the time the AV software quarantined executables as I was creating them so I literally could not run my code. When security enforcement prevents me from working, something needs to change.
As an IT guy, I’d love to give software devs full admin rights to their computer to troubleshoot and install anything as they see fit, it would save me a lot of time out of my day. But I can’t trust everyone in the organization not to click suspicious links or open obvious phishing emails that invite ransomware into the organization that can sink a company overnight.
Fair points, but as someone who works in cybersecurity: phishing emails can happen without admin access. I haven’t heard of any ransomware that is triggered by just clicking on a link.
I think there should be some restrictions but highly technical people should slowly be given more and more control as they gain more trust/experience.
Of course but the impact could be much worse if the victim is admin on their computer.
Exactly this. We try to prevent cyberattacks as much as we can, but at a certain point they’re impossible to perfectly defend against without also totally locking down our users and making it impossible for them to do their jobs. So then the game becomes one of containing the amount of damage an attack can do.
Security is restriction. Our job is to balance our users’ ability to perform their jobs with acceptable levels of risk.
Not a security guy but I heard there’s a whole term for it, “one-click attacks”
This is why we only hire competent engineers.
And the more corporate the organisation the more rules, at least the places I have worked trusts developers enough to give local admin, that takes the edge off many tasks.
I think you probably don’t realise that what you hate is standards and certifications. No IT person wants yet another system generating more calls and complexity, but here is ISO, or a cyber insurance policy, or NIST, or the ACSC asking for minimums, with checklists and a cyber review answering them with controls.
Crazy that there’s so little understanding about why it’s there, that you just think it’s the “IT guy” wanting those.
I thought my comment was pretty clear that some rules are justified and that the IT person can just be the bearer of bad news.
Maybe not, hopefully this comment clarifies.
So you don’t trust me, but you trust McAfee enough to give it full control over the system. Yet my software doesn’t work because something is blocked and nothing is showing up in the logs. But when we take off McAfee, it works. So clearly McAfee is not logging everything. And you trust McAfee but not me? /s kinda.
No one on earth trusts McAfee, be it the abysmal man or abysmal AV suite.
If the EDR or AV software is causing issues with your code running, it’s possibly an issue with the suite, but it’s more likely an issue with your code not following common sense security requirements like code signing.
you don’t code sign during development…
It’s not common, but it should be.
Still, that was just one example. EDR reacting to your code is likely a sign of some other shortcut being taken during the development process. It might even be a reasonable one, but if so it needs to be discussed and accounted for with the IT security team.
You’re talking about during CI. Not during the actual coding process. You’re not signing code while you’re debugging.
I worked in software certification under Common Criteria, and while I do know that it creates a lot of work, there were cases where security has been improved measurably - in the hardware department, it even happened that a developer / manufacturer had a breach that affected almost the whole company really badly (design files etc stolen by a probably state sponsored attacker), but not the CC certified part because the attackers used a vector of attack that was caught there and rectified.
It seemingly was not fixed everywhere for whatever reason… but it’s not that CC certification is just some academic exercise that gives you nothing but a lot of work.
Is it the right approach for every product? Probably not, because of the huge overhead per certified version. But for important pillars of a security model, it makes sense in my opinion.
Though it needs to be said that the scheme under which I certified is very thorough and strict, so YMMV.
I think the meme is more about perspectives: the way someone thinks about operating IT is very different from the way someone thinks about architecting it.
I think it’s on a case-by-case basis, but having help desk people “help” you by opening PowerShell and noodling around without any concept of problem solving made me make this face once.
It probably goes both ways, I’m a dev and I assembled computers at 12 yo so I believe I have a lot of experience and knowledge when it comes to hardware. I’ve also written code for embedded platforms.
IT people, from my POV, can really come across as enthusiast consumers when it comes to their hardware knowledge.
“did you guys hear Nvidia has the new [marketing term] wow!” . Have you ever thought about what [marketing term] actually does past just reading the marketing announcement?
At the same time I swear to God devs who use macs have no idea how computers work at all and I mean EXCLUDING their skill as a dev. I’ve had them screen share to see what I imagine is a baby’s first day on a computer.
To close this rant: probably goes both ways
Interesting comment on the Mac. At my workplace we can choose between Mac or Windows (no Linux option unfortunately, my personal computer runs Debian). Pretty much all the principal and senior devs go for Mac, install vim, and live in the command line, and I do the same. All the Windows people seem over-reliant on VSCode, AI apps, and a bunch of other apps Unix people just have CLI aliases and vim shortcuts for. I had to get a loaner laptop from work for a week and it was Windows. Tried using PowerShell and installing some other CLI tools, and after the first day I just shut the laptop and didn't work until I got back from travel and started using my Mac again.
If you don’t have access to Linux, MacOS is the closest commercially available option so it makes sense.
Also please take what I said lightly, I by no means want to bash Mac users and generalize them. It just has been my experience. I’m sure there are thousands of highly competent technical users who prefer Mac.
Why wasn’t wsl an option?
WSL is interesting because it manages to simultaneously offer everything a Linux user would want while actually being capable of almost none of what a Linux user needs it to do. Weird compatibility issues, annoying filesystem mappings that make file manipulation a pain, etc.
In a Windows environment I’ve found it honestly works better to either ssh into a Linux machine or learn the PowerShell way of doing it than to work through WSL’s quirks
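One concrete example of the filesystem-mapping pain: inside WSL, Windows drives are mounted under /mnt, so every path that crosses the boundary has to be translated (WSL ships a `wslpath` tool for exactly this). A minimal sketch of that mapping, assuming the default mount configuration; the function names are mine, and UNC paths and custom mount roots are ignored:

```python
# Rough sketch of the path translation WSL's `wslpath` performs,
# assuming the default /mnt/<drive> mounts. Illustrative only.
import re

def win_to_wsl(path: str) -> str:
    """Translate a Windows path like C:\\Users\\me to /mnt/c/Users/me."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        raise ValueError(f"not an absolute Windows path: {path}")
    drive, rest = m.group(1).lower(), m.group(2).replace("\\", "/")
    return f"/mnt/{drive}/{rest}"

def wsl_to_win(path: str) -> str:
    """Translate /mnt/c/Users/me back to C:\\Users\\me."""
    m = re.match(r"^/mnt/([a-z])/(.*)$", path)
    if not m:
        raise ValueError(f"not a /mnt-style WSL path: {path}")
    return f"{m.group(1).upper()}:\\" + m.group(2).replace("/", "\\")
```

In practice you'd just shell out to `wslpath`, which also honors custom mount roots configured in /etc/wsl.conf.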
Windows is an option
I have to use Windows for work. Installed vim through winget and set a PowerShell alias, allowing me to use it similarly to Linux. Windows is still just ass though.
Lmao, devs who insist on using VIM and the terminal over better graphical alternatives just to seem hardcore are the worst devs who write the worst code.
“Let me name all my variables with a single letter and abbreviations cause I can’t be bothered to learn how to setup a professional dev environment with intellisense and autocomplete.”
Spoken like someone who knows absolutely nothing about vim/unix.
I know it has a steep learning curve with no benefit over GUI alternatives (unless you have to operate in a GUI-less environment).
Which makes it flat out dumb for a professional developer to use. "Let's make our dev environment needlessly difficult; slowing down new hires for no reason will surely pay off in the long run".
This is it – the dumbest shit I’ll read today, and I was just dunking on Nazis.
I can run Neovim on my phone via Termux. I can run Neovim over SSH. I can run Neovim in tmux. That’s not possible with VSCode.
I prefer Micro via Terminus
Bruh, there’s a whole help page on that:
I have serve-web running as a service, but that only works well on desktop screen layouts — from my experience, it runs terribly on mobile. However, even then, my tab layout isn’t synced between devices. My tmux saves all of my open projects, so I could throw my phone in a woodchipper at any moment, pull out my laptop, and be exactly where I left off. Good luck doing that with vscode.
no benefit over GUI alternatives
Lol nice bait
Or maybe…hear me out…different people like different things. Some people don’t like GUIs and enjoy working in the command line. For some other people, it’s the opposite.
It’s just different preferences.
You are making prejudiced, generalized, assumptions and presenting them as facts.
You are at best naive if you think people use vim and a terminal instead of "better graphical alternatives" (of which there are none, once you've really gotten into vim/emacs/whatever). And we don't do it to seem hardcore (maybe we are, but that's a side effect). Software in the terminal is often simpler to use, because it lets you chain outputs together and its interfaces tend to be simpler.
The second paragraph is word salad. Developers should name their shit properly regardless of editor, and it's quite simple to have a professional dev setup with "intellisense" and autocomplete in Neovim. In fact, vim/Neovim (and I assume Emacs too) have many more features and far more flexibility than users of IDEs or VSCode would so much as think of.
I assume your prejudice comes from the fact that vim is not a “one size fits all no configuration needed” integrated development environment (IDE) but rather enables the user to personalize it completely to their own wishes, a Personalized Development Environment. In that regard, using one of the “better graphical tools” is like a mass produced suit while vim is like a tailor made one.
Just let people use what they like. Diversity is a strength.
You know Neovim can use the exact same LSPs (Language Server Protocol) for intellisense as VS Code, right? There's intellisense, git integration, code-aware navigation, etc. Neovim can be everything VS Code is (they're both just text editors with plugins), except Neovim can be configured down to each navigation key, so it's possible to be way more efficient in it. It's also faster and more memory efficient, because it isn't a text editor built on top of a whole browser engine like VS Code is.
I use a Neovim setup at home (I haven't figured out how to use debugger plugins with Neovim, and the backend I work on is big enough that print-debugging endpoints would drive me insane), and I can assure you I have never given a variable a one-letter name unless I'm dealing with coordinates (x, y, z) or loops (i, j), and usually in the latter case I'll rename the variable to something that makes more sense. Also, we don't do it to seem hardcore; there are actual developer-efficiency benefits, like the ones I listed above.
By your own logic you “can’t be bothered” to learn how to edit a single config file on a text editor that has existed in some form for almost 50 years (vi). Stop making strawman arguments.
I tried using VScode to play around with Golang. I had to quit coding to take care of something else. I hit save, and suddenly I have way fewer lines of code. WTF? Why did/would saving delete code? After much digging, it turns out because the all knowing VSCode thought because I had not yet referenced my variables, I never would, and since my code I wanted to save and continue later wouldn’t compile, it must be quelled. Off with its head!
Anyway, I decided to use vim instead. When I did :wq, the file was saved exactly as I had typed it.
This is either false, or you didn’t understand the environment you were working in.
You have to explicitly turn on the setting to have VSCode reformat on save; it's not on by default. And when it is on, it's there for a reason: having software developers who don't all follow the same code-formatting standard creates needless, unpredictable chaos on git merge. This is literally "working as a software developer on a team 101".
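One caveat worth adding: language extensions can ship their own defaults. The Go extension, for example, documents that it enables format-on-save and organize-imports for Go files, and organize-imports will happily delete unused imports, which is probably what bit the commenter above. A settings.json sketch for turning that off for Go only (exact keys and values may vary by VS Code and extension version):

```json
{
  "[go]": {
    "editor.formatOnSave": false,
    "editor.codeActionsOnSave": {
      "source.organizeImports": false
    }
  }
}
```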
Agreed. I have colleagues I used to write scripts for (I don't do that any more; I stopped, shit stopped working, and now they solve things manually). They didn't know shit about scripting… and still don't.
On the other hand, I’ve had the pleasure of working with a dev that was just this very positive, very friendly person and was also very knowledgeable when it came to hardware, so we were on the same page most of the time. He also accepted most of my code changes and the ones that he didn’t, gave him an idea of how to solve it more efficiently. We were a great team to be honest. We’re still friends. Don’t see him as frequently, but we keep in touch.
devs who use macs
Do they exist? Are you sure they are devs?
Our entire .NET shop swapped to MacBook Pros from Dell Precisions for like 2-3 years because our head of development liked them more. Then went back to having a choice after that. So now we have a mix. In all honesty it’s not much different for me but I use everything…Windows, Mac, Linux. Whatever works best for me for the task at hand. DotNet runs on all three so we kind of mix and match. Deploying to Azure allows a mix of windows/linux and utilizing GitHub Actions allows a mix of windows/linux in the same workflows as well. So it’s best to just learn them all. None of them are perfect and have pros/cons.
I dabble in hardware and networking too. I built my first computer when I was 11 by myself. My parents are kind of tech illiterate. I have fiber switches and dual Xeon servers and the such in my house. My NAS is a 36 hot swap bay 4U server. That knowledge definitely helps when deploying to the cloud where you’re responsible for basically everything.
Also, yes. I can do more than .Net languages…that’s where my job currently falls though.
I’m writing python in PyCharm on my M2 right now.
MacOS is literally certified UNIX though.
I’m not a Mac user at all, and I’m lucky enough to be able to run Linux full time at work, but it seems like macs should be alright in many cases.
They do exist and some of them swear Mac has better workflows (than windows because most of the time your options are Windows or Mac). I would call them loonies but I’ve seen some smart people use Macs.
I’m both IT and development…and I’ve caught both sides being utterly wrong because they’re only familiar with one and not the other
Traitor.
We have a daywalker amongst us…
I don’t get it. And I’ve been both.
Is it about how some software shouldn't need the resources it demands?
I’d say… elitism
More likely tribalism.
Por que no los dos?
Because you can’t have elitism in the group that knows so little about fixing something that one of their actual plans of action is to reboot and pray
Meh it’s usually for shitty companies that expect their devs to write real software, ssh into things, access databases, but put the same hurdles in front of them as joeblow from sales who can’t use an ipad to buy a sandwich without clicking a phishing link. So every new project is slowed down cause it takes weeks of emails and teams conversations to get a damn db sandbox and it’s annoying.
On the other hand IT doesn’t know you and has millions of issues to attend to
IT guy here. If we give one user special rights, that login will get passed around like a blunt at a festival to “save time”.
Users are dumb and lazy, and that includes devs.
It's not special rights, it's project materials approved by leadership and noted on a published, approved feature roadmap.
Edit: assuming requisitioning a scaled db replica is "special" is kinda aligned with the meme lol
Users are dumb and lazy
Funny, that has actually been my entire experience with corporate IT. This field attracts the type of firemen that won’t climb down the pole because it’s a safety hazard. Y’all are… something special.
I took it as software engineers tend to build for scalability. And yep, IT often isn’t prepared for that or sees it as wasted resources.
Which isn't a bad thing. IT isn't seeing the demands the manager/customer wants.
I’m glad you’ve done both because yeah, it’s a seesaw.
If IT provisions just enough hardware, we’ll hit bottlenecks and crashes when there’s a surprise influx of customers. If software teams don’t build for scale, same scenario, but worse.
From the engineer's perspective, it's always better to scale with physical hardware. Meanwhile IT is screaming, "We don't have the funds!"
This is exactly my face when IT is telling me the rules for my passwords.
Sorry, those rules come from our cybersecurity insurance, or some compliance rules.
We hate them as much as you do.
Then why are they different between systems? Do you have different insurers per application?
Those other applications come from an external vendor, we only provide the VM to run them.
We hate those even more than you do.
You can't.
Every single issue that occurs with those applications gets thrown in our laps to fix.
This includes all of yours as well as all your colleagues.
See I think this is where in general people in it misunderstand the impact.
Like, if it’s -40 and your furnace breaks, who is having the worse day, you or the furnace repair man?
The repair man might be grumbling because they have to do their job, but you’re grumbling because you’re freezing. You both might be grumbling, but by way of impact there is a massive asymmetry in impact.
What applications do you have that IT controls the password requirements for?
IT controls your AD credential requirements in most cases and that’s pretty much it. It sounds like your employer needs to implement an SSO solution.
It is the AD credentials. It’s a fortune 500 company and it doesn’t even come close to NIST recommendations.
We have like 3 different ADs as a result of mergers and acquisitions, and the requirements are all different.
Oh…
Well you’re fucked then
What are the requirements?
One of them is EXACTLY 8 ASCII characters: no English dictionary word, no repeating characters, at least 1 number, and at least 1 special character. Just obliterates the search space.
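For fun, here's a rough back-of-envelope on how badly that policy shrinks the space. This assumes 95 printable ASCII characters (10 digits, 33 specials), reads "no repeating character" as "all characters distinct", and ignores the dictionary-word rule entirely, so it even overestimates the compliant set. Compared against simply allowing any 8-16 character password:

```python
# Back-of-envelope: size of the "exactly 8 chars, all distinct,
# >=1 digit, >=1 special" policy space vs. unconstrained 8-16 char
# passwords over printable ASCII. Counts via inclusion-exclusion.
from math import perm

TOTAL, DIGITS, SPECIALS = 95, 10, 33

# Distinct-character 8-char strings, minus those with no digit, minus
# those with no special, plus those missing both (inclusion-exclusion).
policy = (perm(TOTAL, 8)
          - perm(TOTAL - DIGITS, 8)
          - perm(TOTAL - SPECIALS, 8)
          + perm(TOTAL - DIGITS - SPECIALS, 8))

# Unconstrained passwords of length 8 through 16.
unconstrained = sum(TOTAL ** n for n in range(8, 17))

print(f"policy-compliant: {policy:.3e}")
print(f"unconstrained:    {unconstrained:.3e}")
print(f"ratio:            1 : {unconstrained / policy:.3e}")
```

The exact numbers don't matter much; the point is that fixing the length at 8 alone caps the space many orders of magnitude below what letting users pick longer passphrases would allow.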
Containerise everything!
Exactly what a dev would say… you guys don’t have to deal with that 3rd gen i3 Jenny from accounting is running.
Ticket opened: "I need soda intalled, high importance!!!" You get up there, the company's paying for the Adobe suite, it's right there on the desktop…
Yes! Containerize, containerize, containerize until every perfectly good machine built before 2020 is rotting away in a landfill!
No we put those in a cluster or take them home for fun projects :D
I’m IT and my cousin is software. I had to teach him basic computer maintenance…
I spent a weekend helping my buddy who graduated magna cum laude with an Electrical and Computer Engineering degree build a PC. Given a breadboard and some schematics, he could probably have created working prototypes of half of the components, but figuring out where to put the screw risers under the motherboard? Forget about it.
That’s wild
RTFM? Nowadays it’s a YouTube video away. Freaking legos.
I’m almost done with my CS degree, I started learning programming at age 10, low-level software development like drivers and embedded really interests me and that’s the direction I want to go in for a career, but I had to ask my friend who was studying with me to help me build my PC. Hardware just scares me. I’m worried ill bork something :3
the User vs Builder relationship has been a bit tense for thousands of years
“Have you tried unplugging your Pyramid and plugging it back in?”
That’s how I look at 90% of the shit “systems” I’m forced to interact with (xiaomi’s MIUI, banking apps, govt apps, apps that should’ve been fucking websites, websites that “gently nudge” you to use the app, electron apps that are windows only)
MiUI is not that bad IMO. The ad services and the integrated apps are horrible (even without the ads), but apart from that, the UI is fairly usable. They really haven’t changed that much from what Android comes with by default.
This entire thread is giving me impostor syndrome
It just makes me realize how much I hate what I do for a living.
I definitely have moments like this too. I have been reflecting more lately and trying to decide if the feeling is temporary or permanent. I have been pondering what else I would do. Are you considering a career change, and if so, what would you do instead? I don’t know if I could transition to something else without going back to school, and it would kill me a bit inside to take out more student loans.
That’s the conversation I was having with my therapist this week. I don’t know. I’ve always massively struggled with this. Thinking about it sends me into a spiral.
As of now the plan is to look for other opportunities in industry. Some training is fine, but I would like to avoid loans. I don't have anything specific yet, but the public sector is likely part of it. I'm not so much motivated to help people as to make certain people miserable. Countries have started to track "job quality"; it's data worth looking at.
Depending on how that goes I have other thoughts but nothing that is sucking me in. Maybe I’ll give up entirely and become a vagrant. I also have a viable non-expiring business idea that would de-employ a certain group of people I don’t like. I’m not ready for either of those yet.
In the meantime I have a bucket list of things that I'm working through. It helps me feel like my life has forward momentum despite what's happening with my career (it's also opening up new doors I didn't see before, e.g. acting). Between that and therapy, my job often feels like something I'll deal with later.
All devs turn 40 and quit their job, buy a cottage near the forest and start growing their own vegetables anyway, so you just need to stick to it for a few more years.
What has been working for me is not trying to make software my life or my identity. I don’t get home from work just to work on my side project, or my app, or my Arch install, or even watch videos about coding and shit. I hang out at my pond, play with my pets, play with my son, chill with my wife, work on the yard, or just watch/play something that catches my interest.
It’s like we all have a unique user’s manual for our unique bodies and minds, but we don’t get a copy of it and have to do some reverse engineering to figure out what works. Then you have to have the compassion and empathy for yourself to do the things that increase your happiness instead of doing the things that you’re “supposed” to do.
That’s solid advice. I think I have my identity wrapped up too much in my career, so when I dislike my job, I feel unsatisfied in life. I will try to see it as means to an end more than who I am.
Awesome to hear! It’s easier said than done (like always) because I think sometimes we don’t even realize when we’re doing it.
In the first year of COVID my position got eliminated at the company I’d worked at for 16 years. I’d had different positions within the company, but that place was basically my entire career until then.
That shock to the system, coupled with the fact that several months later I realized I was the same person with the same loved ones, finally flipped some switch in my brain that I didn’t even realize was there. Then the next job I got was fucking horrible and served to weld that switch in its new position, lol.
So now I have a good job with good coworkers, and I appreciate that fact every day, but that’s not going to erode the healthy boundaries and mental compartmentalization.
Bruh I’m a software architect but I don’t know how to code competently in any language.
I love unix shops.
🫶
When I tried to request a firewall change, IT told me "ports between 1 and 1024 are reserved and can't be used for anything else", so I couldn't be using it for a pure TCP connection; besides, there would have to be a protocol on top of TCP, because just TCP as the protocol is obviously wrong. I was using port 20 because it was already open…
as a full stack dev, everything you said has offended me.
Port 20 is used for FTP (the data channel). If you were actually using FTP, then go right ahead; guessing that since you didn't know the protocol, you were not.
port usage reservations are incredibly important to ensure that the system is running within spec and secure. imagine each interface like a party telephone line and the ports are time slots.
Your neighborhood has reserved specific times (ports) for everyone to call their relatives. If you use the phone outside your slot (port), your neighbors might get pissed off enough to interrupt your slot, and then it's just chaos from there.
As IT/network/security, using a well known port for something that’s not what is supposed to run on that port, is inviting all kinds of problems.
Especially the very well known ones, like ftp, ssh, SMTP, http, HTTPS, etc (to name a few). People make it their mission to find and exploit open FTP systems. I opened up FTP on a system once to the internet as kind of a honeypot, and within a week or so, there was someone uploading data to it.
No bueno. Don't use a well-known port unless the thing that port is known for is what you actually want to do.
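For reference, the ranges everyone in this thread is arguing about come straight from IANA's port-number registry. A tiny sketch for sanity-checking a requested firewall port (the helper name is mine):

```python
# IANA splits TCP/UDP ports into three ranges: well-known/system
# (0-1023), registered (1024-49151), and dynamic/private (49152-65535).
def classify_port(port: int) -> str:
    """Return the IANA range a port number falls into."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"   # system range: FTP=20/21, SSH=22, HTTP=80, ...
    if port <= 49151:
        return "registered"   # e.g. 4001, the commenter's first choice
    return "dynamic"          # ephemeral/private range

assert classify_port(20) == "well-known"    # why IT pushed back on port 20
assert classify_port(4001) == "registered"  # a far less surprising pick
```

Nothing technically stops raw TCP on port 20, but scanners and middleboxes will assume FTP, which is exactly the "inviting problems" point above.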
All of that is fine, and they mentioned the management perspective, which I get. It was a field test, and our original choice of 4001 - which is what other serial-to-TCP servers like ours use, also in their network - was unavailable.
What irks me is the “technical impossibility” of raw TCP and “I must be wrong” when filling out their firewall change form.
They’ve since given us a different port “close to others that we use”, for whatever reason that matters, and based their choice on some list of common protocols outside the reserved range. But not 4001.
That by itself is just one thing and I wouldn’t give it a second thought, but it’s all part of a larger picture of ineptitude. They opened a ticket because an arrow at the border of our UI vanished when they screen shared on Teams. Because of the red border. And they blamed our application for it.
They didn’t set up their PKI correctly and opening our webpage on specific hosts gave the typical “go back” warning. But it was our fault somehow, even though the certificate was the one they supplied us and it was valid.
What irks me is the “technical impossibility” of raw TCP and “I must be wrong” when filling out their firewall change form.
Most commonly a port is opened to accept traffic of a specific protocol that runs on top of TCP or UDP. I'm guessing the individual who responded might not be very good at technical communication and was just trying to ask "are you sure it's raw TCP and not just HTTP traffic?" in order to keep the holes poked in the firewall as narrow and specific as possible.
They’ve since given us a different port “close to others that we use”, for whatever reason that matters, and based their choice on some list of common protocols outside the reserved range. But not 4001.
Usually if infrastructure is assigning a port other than default it’s because that port is already in use. The actual port number you use doesn’t matter as long as it’s not a common default (which basically all ports below 1024 are)
Using ports that are close together for similar purposes can aid in memorability if that’s a need, but ultimately it doesn’t matter much if they’re not conflicting with common defaults
They opened a ticket because an arrow at the border of our UI vanished when they screen shared on Teams. Because of the red border. And they blamed our application for it.
Probably a user was complaining and needed action immediately and they didn’t have time to test a cosmetic issue in an edgecase. For minor issues I’ll open a ticket with the party I think might be responsible just to get it out of the way so I can get to higher priority stuff, and I’ll rely on that party to let me know if it’s not actually their problem. Heck it might even simply be the IT person assumed it was a misrouted ticket, since users open tickets in random queues all the time
They didn’t set up their PKI correctly and opening our webpage on specific hosts gave the typical “go back” warning. But it was our fault somehow, even though the certificate was the one they supplied us and it was valid.
If the certificate is correctly generated and valid an SSL error would indicate it was incorrectly applied to the application. I’m guessing by the inclusion in this rant that the conclusion was it was in fact a problem with the certificate, but we don’t have enough details to speculate if it was truly a mistake by the individual that generated it or just a miscommunication
Honestly it sounds like you're too quick to bash IT and should instead be more open to dialogue. I don't know the specifics of your workplace and communications, but if you approach challenges with other teams from an "us vs them" standpoint it's just going to create conflict. Sometimes the easiest way is to hop on a quick call with the person once you get to more than a couple of emails back and forth; then you have more social cues to avoid getting angry with each other and can give more relevant details.
That’s the face I’ve made just yesterday when my friend told me she’s now eligible for a subsidized IT mortgage. That thing was one of Russia’s last ditch attempts at stopping skilled workers from fucking off to different countries. The problem is, she’s a web designer. I guess that counts as IT nowadays, so good for her. But it’s bitter to hear as sr. backend tech who never hit the criteria…
That's pretty much how the Russian economy works right now, in a nutshell. To stop emigration caused by the expensive war, they're giving away a ton of expensive handouts.
The interest rate is at 19% and counting. Very cool, very sustainable. I have a feeling “the last laugh” will be yours, OP, even if they win in Ukraine.
Yep, I know the face… made it a few times with colleagues who don't know basic Windows scripting but somehow got bonuses… I don't kiss upper management's ass, so I never do. That's life I guess…
How is software not a subset of IT?
Think of it like an engine: The mechanics working on the engine aren’t the engineers designing the thing.
Honest question: what do we call who is driving the engine?
Drivers would be end users, Clients and project managers sometimes.
Think about it. Many drivers don’t know about checking the oil, maintaining proper tire pressure, tire wear, brake wear, air filters or topping off fluids.
I can do all of the above, but I’m nowhere near a mechanic. Just car savvy. So I could make suggestions to mechanics or engineers that look cool but are insane for functionality.
A user
Infrastructure maintenance is management, security and day to day business, while software engineering is mostly concerned with itself. They use distinct tools and generally have nothing to do with each other (except maybe integration).
We need new terms, IT means “works with computers, but more than Word and Excel” for too many people. In Switzerland they split the apprenticeship names to ‘platform engineer’ and ‘application engineer’, which I think is fitting.
Well yeah, it’d be like if an advertising copy writer said their job was “English”.
IT is an administrative function and is really part of operations.
Software development is generally a creative position and is a profit center. If you work somewhere where you develop internal apps, you may have a different perspective.
My current workplace organizes both development and infrastructure within IT, which itself is a sub-department of finance. I'm not saying this is the best approach; honestly, it only took 1.5 layers of apathetic management to make long-term planning a nonstarter.