Abstract
Not long ago, we had the opportunity to test Apache Flink to see just how fast it would go on a moderately realistic task with fast hardware and a good streaming transport layer underneath. Our goal was not so much careful comparison with other software as flat-out speed, Flink against Flink. In the process, we learned a lot about what it takes to go fast. Some of the lessons were ones that we had “learned” a number of times before:

- the bottleneck isn’t where you thought it was
- copying data is expensive
- context switches are expensive
- measure twice, cut once

But there were some real surprises along the way. The really important knobs weren’t quite the ones people say you should turn. One of the biggest surprises was the degree to which high-performance libraries have threading built into them, which makes the actual concurrency much higher than the apparent concurrency. The result was that at least one cluster parameter needed to be adjusted by 30x to get real performance.
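The abstract contains no code, but the last point is easy to reproduce in miniature. The sketch below is plain Java, not the talk's actual benchmark; the FastCodec class, its 8-thread internal pool, and the parallelism of 4 are all invented for illustration. It shows how a library that quietly runs its own thread pool pushes the actual thread count well above the apparent parallelism the job was configured with.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HiddenConcurrency {

    // Stand-in for a "high performance library" that quietly runs its own
    // thread pool. The class name and the pool size of 8 are hypothetical.
    static class FastCodec implements AutoCloseable {
        private final ExecutorService internalPool = Executors.newFixedThreadPool(8);

        void encode(byte[] chunk) {
            // Each call hands work to the hidden pool; the caller never sees the threads.
            internalPool.submit(() -> {
                try {
                    Thread.sleep(200); // simulate CPU-heavy work on the chunk
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        @Override
        public void close() throws InterruptedException {
            internalPool.shutdown();
            internalPool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        int apparentParallelism = 4; // what the job configuration appears to allow
        ExecutorService workers = Executors.newFixedThreadPool(apparentParallelism);

        for (int i = 0; i < apparentParallelism; i++) {
            workers.submit(() -> {
                try (FastCodec codec = new FastCodec()) {
                    for (int j = 0; j < 16; j++) {
                        codec.encode(new byte[1024]);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        Thread.sleep(300); // let the hidden pools spin up and start working
        System.out.println("Apparent concurrency: " + apparentParallelism);
        System.out.println("Live JVM threads (estimate): " + Thread.activeCount());

        workers.shutdown();
        workers.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

With 4 apparent workers each creating an 8-thread hidden pool, the JVM ends up running on the order of 36 live threads, which is why a cluster parameter sized to the apparent parallelism can be off by a large factor.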
Details
- Date: Day 1 - September 12th
- Time: 14:45 - 15:25
- Location: Palais
- Topic: Applications