Threading Models & The Agent Abstraction:
What is a thread? How do we decide on threading models for our software? Why do operating systems use preemptive multitasking when scheduling threads? Should we even be thinking in terms of threads?
This talk walks through a brief history of threads and thread coordination, up to the present day, when preemptive multitasking is used for all but embedded applications. Judd will argue that threads are the wrong abstraction for writing high-performance software. Preemptive scheduling was created when we didn't trust the other processes running on our computers. With multicore systems and applications running on dedicated machines, we can get more predictable latency and higher throughput with cooperative multitasking. Various language constructs, such as futures, await and coroutines, attempt to address this.
Judd will use the Agent model to walk through the Basic Agent Framework (part of Agrona: https://github.com/real-logic/agrona) and show how it can be used to write high-performance applications.
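To give a flavour of the pattern the talk covers, here is a minimal sketch of the cooperative duty-cycle idea behind Agrona's Agent abstraction. The Agent and AgentRunner names mirror Agrona's API (which lives in org.agrona.concurrent), but this standalone version uses only the JDK and is an illustration of the concept, not the framework itself:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a cooperative "agent" duty cycle, loosely modelled on
// Agrona's Agent/AgentRunner. An agent does a small bounded unit of
// work per call and returns, rather than blocking - cooperation, not
// preemption, decides when other work runs.
interface Agent {
    int doWork() throws Exception; // returns the amount of work done

    String roleName();
}

final class AgentRunner implements Runnable {
    private final Agent agent;
    private final AtomicBoolean running = new AtomicBoolean(true);

    AgentRunner(final Agent agent) {
        this.agent = agent;
    }

    @Override
    public void run() {
        while (running.get()) {
            try {
                final int workCount = agent.doWork();
                if (workCount == 0) {
                    Thread.onSpinWait(); // idle strategy: back off when idle
                }
            } catch (final Exception e) {
                e.printStackTrace();
            }
        }
    }

    void close() {
        running.set(false);
    }
}
```

In the real framework the idle strategy is pluggable (busy-spin, yield, back-off, etc.), which is how the latency/CPU trade-off is tuned per deployment.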
About The Speaker:
Judd Gaddie works at TransFICC where he is building a low-latency API for the fixed income electronic trading markets. He was previously Performance Team Lead at LMAX Exchange, where he was responsible for optimising the system for higher throughput and lower latency. He has spent many hours hunting down and trying to prevent performance regressions through automated performance testing as part of a Continuous Delivery pipeline.
Hot code is faster code, JVM warm-up & JIT compilation:
What is the JVM warm-up problem, and how does it affect our software? How can we aid the runtime in optimising our programs, and is it even a good idea to do so?
This presentation explores the lifecycle of Java code, and how the JVM evolves the runtime representation of code during program execution. From bytecode to assembly and back again (via some uncommon traps), we will cover practical tips on ensuring that Java code is running fast when it matters most.
Technically, the talk covers the operation of the JIT compiler, looking in detail at a number of optimisation techniques employed by the JVM. Benchmarks are used to demonstrate how the various JIT runtime flags can be used to alter the behaviour of the compiled code. The talk briefly looks at alternative technologies such as AOT compilers provided by other JVM vendors, and the experimental AOT compiler introduced in Java 9.
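As a small illustration of the warm-up effect discussed above (a generic example, not taken from the talk), the sketch below warms a hot method before timing it. With HotSpot's default tiered compilation, a method is interpreted first and only compiled after it has been invoked on the order of thousands of times; the JVM flags named in the comments are standard HotSpot flags:

```java
// Demonstration of JVM warm-up: invoke a method enough times to cross
// the JIT compilation thresholds before measuring it. Run with
// -XX:+PrintCompilation to watch HotSpot compile sumOfSquares, or with
// -Xint to see the cost of staying interpreted.
public final class WarmUpDemo {
    static long sumOfSquares(final int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(final String[] args) {
        // Warm-up phase: with default tiered-compilation thresholds,
        // thousands of invocations are enough to trigger compilation.
        for (int i = 0; i < 20_000; i++) {
            sumOfSquares(1_000);
        }

        // Measured phase: by now the method should be running as
        // compiled machine code rather than interpreted bytecode.
        final long start = System.nanoTime();
        final long result = sumOfSquares(1_000_000);
        final long elapsedNanos = System.nanoTime() - start;

        System.out.println(result + " computed in " + elapsedNanos + "ns");
    }
}
```

Hand-rolled timing like this is only illustrative; for real measurements a harness such as JMH handles warm-up, dead-code elimination and other benchmarking traps.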
About The Speaker:
Mark Price has worked on high-throughput, large-scale, and low-latency distributed systems for more than ten years. Previously Lead Performance Engineer at LMAX Exchange and Senior Performance Engineer at Improbable, Mark is now a freelancer applying his experience of low-latency systems to multiple projects.
When not continuing his exploration of the limits of Java/JVM/OS performance, the topics that currently cause him to wake up at 3am for a quick coding session are: lock-free algorithms, kernel tracing with eBPF, and the delights of the Rust programming language.
*** Please ensure you register for this event via Eventbrite including your full name & company for venue security purposes - https://www.eventbrite.co.uk/e/threading-models-the-agent-abstraction-hot-code-is-faster-code-jvm-warm-up-jit-compilation-tickets-48329932128 ***
6pm - Doors open
6.15pm - Threading Models & The Agent Abstraction - Judd Gaddie
6.55pm - Break
7.15pm - Hot code is faster code, JVM warm-up & JIT compilation - Mark Price
Big thanks to GridGain Systems (www.gridgain.com) for sponsoring this event.
This event is organised by RecWorks on behalf of the London Java Community.
You can see our latest jobs here: https://recworks.co.uk/java-developer-jobs-london/.
Continue the conversation at our Slack Group: https://londonjavacommunity.slack.com
Sign up here if you're not a member: https://barrycranford.typeform.com/to/IIyQxd
Sign up on Eventbrite for Location