Due to my strict browser security -
Sometimes Facebook will force me to repeat a captcha. Sometimes Google will force me to go through five levels of captchas - no exaggeration!
But worst of all is Cloudflare, who block me outright. There are only two ways to avoid it:
Cloudflare now oversees ingress for a large chunk of the internet - maybe a fifth of all major websites - and they are probably most famous for their DDoS protection and thus their bot detection abilities.
This was worst around 2022, when I was blocked even by some relatively big tech companies such as OpenAI, but it continues today with many smaller websites (such as regional newspapers).
Cloudflare can see, block or tamper with the plaintext of any communications on sites it 'protects'. Most often this is used to escape or block forum posts containing snippets of code. I believe it is enabled by default, leading to much greater difficulty publishing code anywhere, even on programming-oriented forums such as Hacker News.
Besides all this, there's an argument that Cloudflare is the reason why DDoS attacks returned to the internet (hackers used to DDoS each other, but Cloudflare offered hackers DDoS protection, basically stopping their 'civil war' and renewing the war against non-hackers outside Cloudflare's protection).
Years ago, my main Reddit account was permanently banned by Reddit, because they thought I had been hacked - but I had simply been using a Python script that intercepted all my browser traffic. Totally understandable for them to do this, except they don't respond to appeals and creating a new account violates their ToS.
It used to be that Google would force me to complete captchas every few minutes if I went beyond the 2nd page of results. And Google captchas used to be awful - multiple pages of captchas of slow-fading image blocks.
This is despite my IP address
Everything, thus far, is pure CSS and HTML. I wanted to avoid using JavaScript as much as possible.
I found
The sidebar (on desktop) becomes a header (on mobile). Change your browser's window size to see the problem it creates - a table of contents is more natural to display as a sidebar than as a header. To mitigate this, I grouped sub-headers into containers that automatically re-orient depending on whether the screen is in landscape or portrait mode.
Previously, I displayed a list of all blog pages on the sidebar/header of all blog pages. But the sidebar became valuable real estate when their contents grew, so I removed that and replaced it with a single link to
In the blog overview page, I wanted to display previews of all blog posts. The lazy solution to that was just to copy the raw HTML contents, use CSS to make it unclickable, and add a “click here” link to the full blog article. The problem with that is that it isn't immediately clear to the viewer that links in the preview are not clickable - so I muted the colour contrasts, which is sort of the universal way of signalling that a UI area is 'disabled'. In particular, I ensured that links in the previews have a muted blue colour - I could have achieved the same effect by simply reducing the preview element's opacity, but that can have unintended side-effects (including slower page rendering).
I found myself often clicking my circular avatar as a way to go back to the main website. This is surprising, because my avatar wasn't clickable. This behaviour was instinctively ingrained in me, presumably because it is so common on other websites that peoples' avatars are clickable. Presumably this instinct is even more ingrained in other people, so I moved my avatar into the link to make it clickable too.
This blog has a different style to the rest of my website. A light-themed blog on a broadly dark-themed website - it doesn't sound like a good idea.
The reason for this is that, to me, a dark-themed blog looks too 'l33t' - too edgy, too new-style.
Dark themes are almost the default for programming - GitHub, VSCode etc - seemingly as a way to distinguish themselves from the professional looks of other industries, just like they do in other ways: wearing hoodies to work, using black gaming laptops with RGB lighting, using monospace fonts.
Light themes are the default elsewhere - Word, Excel, Exchange, white printers, white desktops, shirts - because that is the world of paper. It makes sense to me that the simple, low-tech portion of my website that consists only of words - the blog - should fit this theme of being paper on a screen.
The sidebar table-of-contents should highlight the currently-in-focus section, and it should show only the highest-level headers plus ancestors of the currently-in-focus section (see
Look into
I never want to read a romance book. That's unmanly. But I want to read a hard scifi book that becomes a romance. A hero who quits his quest halfway through because he gets comfortable with a woman.
One problem I have is I'll read through half of a book, to where everything is kind of nice and cosy, and I'll stop there because I can see my fantasy will be ruined by the main character making a stupid mistake that breaks up his nice dating or family life or something.
Beowulf
I remember it being a slog to read. Not enjoyable, and only somewhat interesting.
I believe I read a version that was translated into modern-ish English, like
It's not even set in England! It's set in Denmark or Sweden. I think that was a disappointment to me - at the time I was very confused and couldn't quite understand if this was actually Beowulf, because it didn't match my expectations at all.
Both of these are merged in my mind. They were both Napoleonic-era naval series, about a man's naval career and the adventures along the way. One of these protagonists was embarrassingly 'goodie-two-shoes' - stoic, one-dimensional.
I read either Hornblower or Bolitho books when I was very young - maybe 9 years old. When I had my first crush on a girl, I didn't realise she was Asian - school only taught us about White, Brown and Black people - and because these books described the lifelong effects of 'Yellow Fever' (reoccurance of yellow skin), I just assumed that she had inherited it from her parents, because they all had quite yellow skin. But I was socially-aware enough not to ever ask her about it.
I was also young when I read these books, maybe age 10 or 11. I remember being annoyed that Sharpe (was he a Lieutenant or Captain at the time?) was 'weakened' when he fell in love with a woman.
I can't remember if this book was a stand-alone or was part of the Sharpe books. It was set in the Spanish theatre of the Napoleonic Wars, and the title comes from Spanish anger at the French strategy of 'burned ground' (retreating and burning everything to stop the enemy taking supplies). In reality, the British started this tactic, and probably engaged in it more than the French, including in their ally Portugal's lands, leading to Portuguese anger at Britain (none of which is covered in this book, nor should you expect from this style of book).
This book was about a team of astronauts landing on and investigating a mysterious object in space. It sent them back into Genghis Khan's era. I don't remember much about it, but the woman became a concubine of Khan (ew).
I got into this hobby due to 3 things:
Gaussian splatting is a graphics method developed in the 1990s as an alternative to tesselated (triangular) rendering. Although tesselated rendering has been the norm for decades, splatting returned around 2022 or 2023 due to research papers, based on neural network techniques (possibly specifically NeRF - neural radiance fields), which suddenly turned splatting into the fastest way to render realistic scenes, because it embeds all lighting information into each gaussian using 'spherical harmonics' or something.
See
I used to be very skeptical of photogrammetry - it struggled with hair and glass, and always required enormous manual effort to clean up the files to be usable in a 3D render (i.e. to add light maps). The professional tools were probably much better at this - I imagine the film industry has ways to scan objects and automatically produce those light maps - but it couldn't be done with a 'prosumer' DSLR camera.
Gaussian splatting completely solves the hair, glass, lighting and size problems for static scenes.
Cardboard. That basic material that protects our Amazon deliveries and cereal boxes. Have you ever stopped to think about what happens to cardboard after it's served its purpose? Recycling, you'd think, right? Yes, of course. But. But but but.
Let's start with the basics. Recycling cardboard is like getting a Dundie Award for Participation. You're not exactly saving the world, but hey, at least you're trying. You break down the box, you toss it in the bin, and you feel a little spark of virtue. But then you remember that time your neighbor threw a greasy pizza box in the recycling bin, and now the whole batch is contaminated. This is why we can't have nice things.
Recycling, much like Jim's pranks on Dwight, is a delicate balance. Do it right, and you're a hero. Do it wrong, and you're the guy who put a stapler in Jell-O. And let's be real, most of us are the stapler-in-Jell-O guy when it comes to recycling.
When I went for my operation, the nurse told me that I'd have to take off anything that wasn't 100% cotton. I guess polyester fucks things up if it gets inside a wound.
So what if I'm riding a bike, and I scrape my leg bloody? Presumably I should wear 100% cotton clothes all the time to avoid getting polyester inside me?
I've also heard that fabric softener makes clothes more flammable - presumably something you want to avoid if you work in a lab. Presumably polyester melts and clings to skin - so you'd want 100% cotton for that reason too.
IIRC, there's a criminal case where a child was convicted of burning her mother to death, based on the fact that an 'accelerant' was discovered on the mother's body - which was later overturned when an expert engineer testified that it could easily have been an accident caused by plastic melting into a burning liquid like napalm. I wonder how many people have been wrongfully accused of horrific acts just because of a freak accident setting off some of the death traps of modern life.
Teachers who paste their students' coursework into ChatGPT and ask it if it was written by AI - then falsely accuse the student of cheating.
'AI detection' tools that show 60% similarity to AI text (quite normal for a well-read human who knows 'complicated' words) - then falsely interpreted by staff as 60% chance of being written by AI.
On HN, someone wrote that written answers are part of the interview process at their company. To combat interviewees using AI, there are 'secret' instructions (in size 0 text), so that if an interviewee copies it into ChatGPT, these 'secret' instructions result in ChatGPT using specific words. Thus any response using these words is automatically discarded. So that means, as a human, you should select all the text, copy it into an ASCII text editor, and look for the words that you should avoid writing!
I'm going through some of my school-era web scraping projects. One of these was my first 'big' Python project, and it taught me Object Oriented Programming (OOP), which I had been fervently avoiding previously.
OOP used to confuse me, because the guides didn't explain why you would use it. They simply used examples, like a class 'Cat' and class 'Dog' both inheriting from class 'Animal' - but then they'd add a function 'bark' for 'Dog' class, which is not a use case for OOP!
At the time of writing, I haven't looked at the code again, but the data is structured with each user, post, forum and comment in their own unique subdirectories - tens of thousands of directories. I had diligently collected all the metadata needed to display comments, but for some reason the list of avatar file names was stored as a text file underneath a subdirectory in the user subdirectory. Hundreds of megabytes - the majority of the disk size - was used on these inodes instead of data!
With just a few minutes' investment, I repackaged this data into combined JSON files, remembering to keep the file modification times, because the API did not provide timestamps and file modification times are the only method to know roughly what time the comments were posted (making some decisions such as getting modification times from directories instead of files).