That’s pretty much what I learned over a couple of years of watching my dad load up our car for every family holiday.
Watch me shaking my head over my in-laws’ dishwasher, and then re-arranging it while everyone else tut-tuts over at the coffee table.
BlockOut on the Amiga and the Lynx were fun. But not as satisfying as getting a clean kitchen in one go.
Other people’s dishwashers?
Close the door and
I refuse to give up on humanity.
Oh. Did I just post the dishwasher singularity up there? Damn.
… “put the largest objects in first and then proceed in decreasing order of size” always worked for me
Sarah Silverman is suing the creator of ChatGPT for unauthorized use of her 2010 book “The Bedwetter,” according to a lawsuit filed Friday in a U.S. District Court.
The comic has joined authors Richard Kadrey and Christopher Golden in two class-action lawsuits against tech giants OpenAI and Meta, the creator of rival AI chatbot LLaMA, which were reported by The Verge on Sunday.
Not wrong as such, but also too narrowly considered. Depending on how the boot of the car is shaped and on how erratically formed (or deformed) the cargo you are about to cram into it is, laying down a layer of small-ish objects to create a more or less even load bed (and filling all the nooks and crannies) will make it easier and more efficient to put in the larger and, one hopes, more evenly shaped stuff. But don’t use up all the small stuff right away. You’ll need some to fill the space between the uneven larger stuff and the final layer right underneath the bootlid.
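The “largest objects first, then decreasing order of size” rule is essentially the classic first-fit-decreasing heuristic from bin packing. A minimal sketch in Python, with made-up luggage sizes and an arbitrary boot “capacity”:

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin-packing heuristic: pack item sizes into bins.

    Items are placed largest-first; each item goes into the first
    bin with enough remaining room, otherwise a new bin is opened.
    """
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Hypothetical luggage sizes against a boot capacity of 10
print(first_fit_decreasing([7, 5, 4, 3, 2, 2], 10))
# → [[7, 3], [5, 4], [2, 2]]
```

As the comment above points out, this heuristic ignores shape and stacking order, which is exactly where real boots (and dishwashers) punish you.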
Pro tip: when parking on an incline, always make sure the boot points upwards to prevent embarrassing spills.
When you load up a new or unfamiliar car, work out a rough strategy first. Check where the manufacturer has placed stuff you might need during the journey, like the spare wheel, the jack, the first aid kit, etc. You can render them totally inaccessible more efficiently this way.
Make sure that off-the-shelf AI model is legit – it could be a poisoned dependency
French outfit Mithril Security has managed to poison a large language model (LLM) and make it available to developers – to prove a point about misinformation.
That hardly seems necessary, given that LLMs like OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA already respond to prompts with falsehoods. It’s not as if lies are in short supply on social media distribution channels.
But the Paris-based startup has its reasons, one of which is convincing people of the need for its forthcoming AICert service for cryptographically validating LLM provenance.
[…]
ChatGPT response:
Oh wait, that’s CatGPT.
Can you really call a trend “growth” when it’s simply a measure of how many people were playing around with a trial version of something? That’s like reporting projected sales of All New Food Product based on how many hungry Costco shoppers snagged a sample on their way by…
That might cause some Originalist head-scratching.
“What if the 2nd Amendment was a hallucination?”