Thinking about scaling up ...

The current size of my scraped subset of Wikimedia Commons images is around 33GB. It will get way larger. We already know from my escapades that local storage is hella expensive in AWS but S3 is super cheap.

My escapades with Paparouna CI have also told me that spot pricing for EC2 is hella cheap.

But these won't jibe well - having all the data on S3 means idle time as each instance grabs its piece of the dataset on startup.
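
Roughly what that startup fetch looks like, as a minimal sketch - bucket name, shard prefix, and scratch path are all hypothetical placeholders, not my actual layout:

```python
# Sketch: pull one dataset shard from S3 when a spot instance boots.
# BUCKET, SHARD_PREFIX, and DEST_DIR are made-up placeholders.
import os
import boto3

BUCKET = "my-commons-scrape"         # hypothetical bucket holding the scraped images
SHARD_PREFIX = "shards/shard-0001/"  # hypothetical shard assigned to this instance
DEST_DIR = "/mnt/scratch/dataset"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

os.makedirs(DEST_DIR, exist_ok=True)
for page in paginator.paginate(Bucket=BUCKET, Prefix=SHARD_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        local_path = os.path.join(DEST_DIR, os.path.basename(key))
        # The instance does no useful work until this loop finishes.
        s3.download_file(BUCKET, key, local_path)
```

Every instance burns spot-hours sitting in that download loop before it does any real work, and that's the idle time that bugs me.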
