Ask HN: How does the cost of CPU-bound computations scale in cloud computing?

Let's say I'm a rich mathematician (that would be nice) who wants to sponsor a computation that takes some gargantuan number N of floating-point or integer operations. It parallelizes easily in the cloud, my programmers have come up with a reasonably efficient implementation, and I've benchmarked it with some smaller-scale tests. I/O and storage costs are negligible; the bulk of the cost is pure computation.
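
To make that concrete, here's the back-of-the-envelope model I have in mind, in Python. Every number in it (N, the per-instance throughput, and so on) is a made-up placeholder, not a real benchmark result:

    # Rough sketch: convert the total work N into instance-hours.
    # All numbers here are hypothetical placeholders.
    N = 1e21                            # total floating-point/integer operations
    ops_per_sec_per_instance = 5e10     # sustained throughput from my small-scale benchmark
    seconds_of_compute = N / ops_per_sec_per_instance
    instance_hours = seconds_of_compute / 3600
    print(f"{instance_hours:.3e} instance-hours")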

Now I'm ready to get out my checkbook.

What factors influence the cost of such a computation in today's cloud computing market?

How does the price scale with N and with the rate at which I want the problem finished? (For example, if it takes 1,000 AWS instances 10 months or 10,000 AWS instances 1 month, am I going to pay about the same? I'm assuming the total run time is under a year, so not long enough that per-unit compute prices would drop significantly while the job is running.)
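
My naive mental model is that on-demand pricing is linear in instance-hours, which would make those two schedules cost the same. Here's the arithmetic with a placeholder hourly rate (not a quoted AWS price):

    # Naive model: on-demand price is linear in instance-hours, so both
    # schedules below cost the same.  $0.05/hour and 730 hours/month are placeholders.
    price_per_instance_hour = 0.05
    hours_per_month = 730
    cost_slow = 1_000 * 10 * hours_per_month * price_per_instance_hour   # 1,000 instances for 10 months
    cost_fast = 10_000 * 1 * hours_per_month * price_per_instance_hour   # 10,000 instances for 1 month
    print(cost_slow, cost_fast)   # identical under this naive linear model

Is that how it actually works, or do volume, commitment, or duration change the rate?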

Are there kinds of flexibility that would decrease the price? (Example: the provider can pause my computation or throttle the CPUs whenever it wants in the short term, as long as the average rate at which my computation runs on any given day stays above, say, 80% of full speed.)
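
What I'm imagining is something like spot/preemptible capacity, where I trade interruptibility for a discount. A toy model of what I mean, with invented numbers for the discount and the throttle floor:

    # Hypothetical interruptible pricing: I accept pauses/throttling in exchange
    # for a discount.  The 60% discount and 80% daily floor are invented numbers.
    on_demand_price = 0.05         # $/instance-hour, placeholder
    discount = 0.60                # discount for tolerating preemption/throttling
    daily_throughput_floor = 0.80  # provider must average >= 80% of full speed per day
    cost_per_useful_hour = on_demand_price * (1 - discount) / daily_throughput_floor
    print(cost_per_useful_hour)    # price per hour of full-speed-equivalent compute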

Lenstra used the term "dollardays" to describe the cost of breaking cryptographic keys as something that scales with the capital cost of the computing equipment multiplied by the time it is used (40 million dollardays = 40 days on $1 million worth of computers, or 1 day on $40 million worth of computers), but with cloud computing it seems more like a piecework pricing structure.
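
Just to spell out the comparison I'm making (using the same numbers as Lenstra's example, plus my own guess at how the cloud analogue gets billed):

    # Lenstra's metric: cost ~ (capital cost of the hardware) x (days it is used).
    dollardays = 40e6
    print(dollardays / 1e6)    # 40 days on $1M worth of machines
    print(dollardays / 40e6)   # 1 day on $40M worth of machines
    # Cloud piecework analogue: the bill is (instance-hours consumed) x (hourly rate),
    # with no capital outlay, so it tracks total work done rather than hardware owned.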

1 point | by jason_s 2 hours ago

0 comments