It got surprisingly heavy in places, and I didn’t realize I had grown so attached to some of those characters!
Campaign 2 was great—I really loved the guest star and secondary plot, and I’m now on C3. Have been binging the hell out of it for the past 6 months or so
You ever get the feelin’ that sump’n ain’t right at the crick?
In no particular order, I listen to all of them regularly:
Omnibus - general obscure history hosted by indie rocker John Roderick and Jeopardy’s golden boy Ken Jennings
The Dollop - (mostly) American history with a leftist bent. One comedian reads a story the other hasn’t heard before.
Not Another D&D Podcast - apologies for the first episode, but great world- and character-building. Really shows how great cooperative storytelling can be
Last Podcast on the Left - comedy/horror. Conspiracies, cults, UFOs, and other weird shit. Their historical deep dives are awesome.
I listen to these regularly, but there’s a limited series podcast I like to recommend called S-Town. It’s excellent, especially if you’re from the southern US or grew up in a rural area. If you aren’t from the south or a rural area, it’ll probably be an extra-wild ride!
I’m the production manager and audio engineer for an independent venue, but I also do enough extracurricular, 1099 work that I needed to start spending money to write off on my taxes.
So, I bought a nice PC a few years ago, started using a friend’s old laptop (that I just replaced with my recent, copilot-infected purchase) to take multitrack recordings for local artists at work, and have been making my way into the mixing and mastering world at home. I figured getting some experience on the studio side would improve my live sound skills and give me something of a fallback, just in case.
Not quite sure how that’s panning out, but I have learned a few things and have gotten some decent sounds just recording with standard, live audio gear!
I’ve heard that Carla is the way to go, but how much overhead will it add when basically all the plugins I use are VST3? At least one project on my tower PC is pretty much maxed out as it is with them running natively on Windows.
My other issue is simply time: this is already side project stuff that I do for a little extra money/learning/career development, and at this point, I simply don’t have time to try alternatives.
If I were just researching and writing papers like I did back in grad school, Windows would be gone, but as it stands, the path of least resistance for the audio work I’m doing is just to deal with what I’ve got.
Got a new laptop recently. Copilot pops up, so I asked it how to permanently disable Copilot.
It gave me a wordy non-answer, along with a “fun fact” about my local area — totally relevant and not creepy at all.
Then, after I demanded it tell me how to permanently disable itself, Copilot gave me a completely wrong answer.
After specifying the “app or service” I’m using (Windows, you fucking clueless piece of shit), it then gave me a half-baked answer that called commands which weren’t installed by default.
I then used DuckDuckGo to figure out how to install the configuration tool Copilot had told me to use but that Windows had decided to hide from me.
Good job completely wasting my time, you AI-loving fucks at Microsoft. I don’t need new reasons to nuke your shitty software and install Linux, but now I have them. If Linux had native VST3 support, I wouldn’t have even booted into Windows.
Edit: Stranger in a Strange Land is a great book, and as the sci-fi novel that served as a backdrop to hippie culture, I wouldn’t have expected Musk to have read it.
He wasn’t when we lost him, but I’m going to get it done soon.
That said, he’s now a senior and diabetic, so I think he’s gonna be an indoor boy from here.
Folks in Mississippi passed an initiative for a fairly lax medical law in 2020. Some Karen mayor of one of the suburbs around the capital city used judicial chicanery to get it thrown out at the State Supreme Court, along with the ability of the populace to vote on ballot measures going forward.
I doubt that OP was debating you in good faith, but it did happen at least once in the last few years. The Republicans certainly didn’t waste the opportunity to minimize the effects of democracy on their power.
I’m working my way through House of Leaves right now, and the real horror is the grad school flashbacks from trying to follow the footnotes.
Farscape is like Mormonism — gotta watch Trek first to catch the references!
ZIYAL: Why don’t you just let Garak design a dress on his own? You know whatever he comes up with will be beautiful.
GARAK: My dear, I find your blind adoration both flattering and disturbing, but she does have a point.
My TV is insulting like that. It technically has an EQ, but it makes no perceivable difference no matter what I do in it.
What the hell!
But assuming it worked, wouldn’t doing that strictly with sound frequencies cause issues? Like, okay, most voices are louder because I boosted their frequency, but now that one dude with a super low voice is quieter, plus any music in the show is distorted. Or something like that.
Not necessarily. Regardless of vocal range, roughly 400 Hz-2 kHz makes up the body of what you hear in human speech, or the notes from instruments carrying a melody. Below that, say, 160-315 Hz is going to be the “warmth” and “fullness” of the sound, while 2.5 kHz-8 kHz is going to be the enunciation and clarity (think ch-sounds, ess-es, tee-s, etc.).
Sure, if you start really going hard on an EQ, you could absolutely throw everything out of balance. If you cut 12 dB at 250 Hz, all the warmth will be gone and everything will sound thin. If you scoop a bunch of 400 Hz-1.6 kHz, it will sound like a walkie-talkie, and if you make a large boost around 3 kHz-8 kHz, then everything will probably sound harsh and scratchy.
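Those band numbers are easy to sanity-check in code. Here’s a minimal sketch of that 12 dB cut at 250 Hz as a standard peaking-EQ biquad (coefficients from the well-known Audio EQ Cookbook formulas); the function names, sample rate, and test tones are my own choices for illustration, not from any particular plugin:

```python
import math

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ band (Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)           # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return (b0 / a0, b1 / a0, b2 / a0), (1.0, a1 / a0, a2 / a0)

def biquad_filter(x, b, a):
    """Run samples through the biquad (direct form I)."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

fs = 48_000
b, a = peaking_eq(fs, f0=250, gain_db=-12)   # the warmth-killing cut

# A 250 Hz tone loses about 12 dB; a 3 kHz tone passes nearly untouched.
for freq in (250, 3000):
    x = [math.sin(2 * math.pi * freq * i / fs) for i in range(fs)]
    y = biquad_filter(x, b, a)
    peak = max(abs(v) for v in y[fs // 2:])    # skip the filter transient
    print(f"{freq} Hz tone: {20 * math.log10(peak):+.1f} dB")
```

The point the numbers make: a band cut only touches content near its center frequency, so dialogue warmth at 250 Hz drops while the clarity range stays put.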
This is where the listening environment becomes important to consider. Do you live near a busy highway, or do you have a loud air conditioner? You don’t need to answer these questions in public, but those kinds of ambient sounds can compete with the enunciation frequencies, or add to the buildup of “mud” in the lower part of the spectrum.
The size, shape, material properties etc. of your room and furniture also play a role here. For example, a bunch of bare walls and hard surfaces will cause a lot of the high frequencies to bounce around, potentially causing a buildup of harshness. This is why recording studios and your high school band hall probably have those oddly-shaped, cloth-covered wall “decorations” that serve to neutralize the cavernous sound you’d get in a large, bare room.
Overall, compensating for the environment is where you should probably aim your EQ. That is, even if source material varies wildly, it’s probably best to EQ to the room you’re in rather than to each individual program.
The way to do it is to find a song you know by heart, one whose ideal sound you can hear in your head (there are a few that, to me, sound great in my car and on my favorite pair of headphones, so I use those), and play it through your TV. Then fiddle with the EQ until it’s as close to that ideal sound as you can get it.
I would bet there is one mix created in surround sound (7.1 or Dolby Atmos or whatever), and then the end-user hardware does the down-mixing, i.e. from Atmos with ~20 speakers down to a pair of AirPods.
In the music world, we usually make stereo mixes. Even though the software that I use has a button to downmix the stereo output to mono, I only print stereo files.
It’s definitely good practice to listen to the mix in mono for technical reasons and also because you just never know who’s going to be listening on what device—the ultimate goal being to make it sound as good as possible in as many listening environments as possible. Ironically, switching the output to mono is a great way to check for balance between instruments (including the vocals) in a stereo mix.
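The mono check above boils down to summing the channels and listening for what changes. A tiny sketch, with invented sample values just for illustration:

```python
def downmix_to_mono(left, right):
    """Fold a stereo signal down by averaging L and R per sample."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# A sound panned dead center (identical in both channels) survives intact...
center = downmix_to_mono([0.5, -0.5], [0.5, -0.5])

# ...but a "wide" effect made by flipping polarity between channels
# cancels completely in mono, which is exactly the kind of problem
# a mono check is meant to catch before release.
wide = downmix_to_mono([0.5, -0.5], [-0.5, 0.5])

print(center)  # keeps its level
print(wide)    # collapses to silence
```

Out-of-phase content disappearing in the fold-down is why a mix that sounds huge in stereo can lose an instrument entirely on a single phone speaker.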
At any rate, I think the problem of dynamics control—and for that matter, equalization—for fine-tuning the listening experience at home is going to vary wildly from place to place and setup to setup. Therefore, the hypothetical regulations should help consumers help themselves by requiring compression and EQ controls on consumer devices!
Side tip: if your TV or home theater box has an equalizer, try cutting around 200-250 Hz and bringing the overall volume up a tad to reduce the muddiness of vocals/dialogue. You could also try boosting around 2 kHz, but as a sound engineer primarily dealing with live performances, I tend to cut more often than I boost.
Audio compression is much older than 20 years! Though you’re probably right about it becoming available on consumer A/V devices more recently.
And you’re definitely correct that “pre-applying” compression and generally overdoing it will fuck up the sound for too many people.
The dynamic ranges that are possible (and arguably desirable) to achieve in a movie theater are much greater than what one could (or would even want to) achieve from some crappy TV speakers or cheap ear buds.
From what I understand, mastering for film is going to aim for the greatest dynamic range possible, because it’s always theoretically possible to narrow the range after the fact but not really vice-versa.
I think the direction to go with OP’s suggested regulation would be to require all consumer TV sets and home theater boxes to have a built-in compressor that can be accessed and adjusted by the user. This would probably entail allowing the user to blow their speakers if they set it incorrectly, but in careful hands, it could solve OP’s problem.
That said, my limited experience in this world is exclusive to mixing and mastering music and not film, so grain of salt and all that.
Judging by The Dawn of Everything sitting next to it, I’d guess that book is Debt: The First 5000 Years by David Graeber!
I have to back into a parking spot in a shitty, shared driveway. If I don’t throw my (automatic transmission) car into neutral and coast into place, my car will decide I’m too close to the curb and just slam the fuck out of the brakes while still several feet away from where I intend to be. It sounds awful and it scared the absolute shit out of me several times before I internalized the workaround.
Good thing I’m not a fan of the backup camera in general, or this problem would be even more irritating, since the camera turns off when I go from reverse to neutral.
I started on a small instance that fortunately gave a heads up when they decided to shut down. When I moved to a second, small instance where I ported all my community subscriptions, it shut down with no warning. It’s a shame, because both instances were topically-focused and small enough to avoid defederation drama.
I love the idea of decentralized infrastructure, but now I’m on .world because I just don’t have the time or willpower to move every few months, and I definitely don’t have the wherewithal to run my own instance.
OneDrive decided to kick on after an overnight update and uploaded some projects and VST plugins to the cloud. Apparently, the files weren’t accessible except via the cloud, so I lost a few hours re-downloading my folders before I could do anything. I don’t know if I’ve ever been more furious over technology that I theoretically owned.
I got a PC in order to eventually go back to Linux, where at least I know that when something goes wrong, it’s generally my own fault and somewhat easy to troubleshoot. Unfortunately, the plugins I’ve been using only have Windows and Mac versions. If I had done a bit more research, I probably would have just gone with an apple device.
Sure, but why’s the coke mirror on the floor??
Clappell Roan
The Spews
Pogs in a Pile
Grizzly Beer
Waylon Wennings