Twitter publishes a feature-rich API and, so long as you play within their rules, allows certain classes of bots and auto-posting.
I’ve designed my own Twitter bot: it accepts TCP connections from a WiFi-enabled field camera and posts a tweet (with a picture attached) any time the camera spots activity at the salt lick. The next step is to make it accept a direct message instructing it to take a snapshot on demand.
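In case anyone wants to riff on it, here’s a rough sketch of the camera-to-tweet bridge in Python with tweepy. The port number, the credentials, and the assumption that the camera streams a single JPEG per connection and then closes are all stand-ins, not my actual setup:

```python
# Minimal camera-to-tweet bridge (sketch). Assumes the camera opens a TCP
# connection, sends one JPEG, and closes. Credentials and port are placeholders.
import socket
import tempfile
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # assumed app credentials
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",  # assumed user credentials
)
api = tweepy.API(auth)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))  # whatever port the camera is configured to hit
server.listen(1)

while True:
    conn, addr = server.accept()
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)
    conn.close()

    # Write the snapshot to a temp file so tweepy can upload it.
    with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
        f.write(b"".join(chunks))
        path = f.name

    media = api.media_upload(path)
    api.update_status(
        status="Activity at the salt lick",
        media_ids=[media.media_id],
    )
```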
Twitter is pretty strict about rate limits on auto-posts. Luckily, the posting frequency limits are exposed in the API, so it’s easy to slow down and avoid triggering that rule. Some other restrictions (e.g. duplicate or even similar-but-not-identical posts) aren’t as well documented, and it’s easy to get temporarily banned by accident.
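Concretely, each v1.1 API response carries `x-rate-limit-remaining` and `x-rate-limit-reset` headers, so pacing can be as simple as something like the sketch below (tweepy v4; the wrapper is illustrative, not what my bot literally runs):

```python
# Sketch: back off until the rate-limit window resets, then retry once.
# The header names are Twitter's; the wrapper itself is hypothetical.
import time
import tweepy

def post_paced(api: tweepy.API, status: str, **kwargs):
    try:
        return api.update_status(status=status, **kwargs)
    except tweepy.TooManyRequests as e:
        # x-rate-limit-reset is a Unix timestamp for when the window reopens.
        reset = int(e.response.headers.get("x-rate-limit-reset", 0))
        time.sleep(max(reset - time.time(), 0) + 5)  # small safety margin
        return api.update_status(status=status, **kwargs)
```

This only handles the documented frequency limits; the undocumented duplicate-content checks still have to be dodged by varying the post text.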
Automation seems so central to what makes the internet possible that the pervasive anti-bot position I encounter seems a bit hypocritical. Just last month I was doing some Wayback Machine archaeology, following dead links from a site about historic chatbots, and a surprising (to me) number of the linked bot sites were never crawled and backed up because of their posted “robots.txt” notices. Keep your bots away from my bots!
What I find depressing about contemporary use of bots is that they are deployed simply to mimic the fickle and inane traits of human online activity. People already create plenty of predictable traffic by posting their usual content on social media sites. What I find interesting about bots is that they aren’t human and are free of human biases. Making robots look and act like people, or writing code that imitates the concerns and expressions of people, goes IMO in the complete opposite direction from what is interesting and beneficial about them.