@@ -59,6 +59,44 @@
* SDL_LoadFileAsync as a convenience function. This will handle allocating a
* buffer, slurping in the file data, and null-terminating it; you still check
* for results later.
+ *
+ * Behind the scenes, SDL will use newer, efficient APIs on platforms that
+ * support them: Linux's io_uring and Windows 11's IoRing, for example. If
+ * those technologies aren't available, SDL will offload the work to a thread
+ * pool that will manage otherwise-synchronous loads without blocking the app.
+ *
+ * ## Best Practices
+ *
+ * Simple non-blocking i/o--for an app that just wants to pick up data
+ * whenever it's ready without losing framerate waiting on disks to spin--can
+ * use whatever pattern works well for the program. In this case, simply call
+ * SDL_ReadAsyncIO, or maybe SDL_LoadFileAsync, as needed. Once a frame, call
+ * SDL_GetAsyncIOResult to check for any completed tasks and deal with the
+ * data as it arrives.
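+ *
+ * A minimal sketch of that per-frame pattern (the `ConsumeData` callback is
+ * hypothetical app code; everything else is declared later in this header):
+ *
+ * ```c
+ * // At startup: make a queue and kick off a load.
+ * SDL_AsyncIOQueue *queue = SDL_CreateAsyncIOQueue();
+ * SDL_LoadFileAsync("level1.dat", queue, NULL);
+ *
+ * // Once a frame: drain any finished tasks without blocking.
+ * SDL_AsyncIOOutcome outcome;
+ * while (SDL_GetAsyncIOResult(queue, &outcome)) {
+ *     if (outcome.result == SDL_ASYNCIO_COMPLETE) {
+ *         ConsumeData(outcome.buffer, outcome.bytes_transferred);
+ *         SDL_free(outcome.buffer);  // SDL_LoadFileAsync allocated this.
+ *     }
+ * }
+ * ```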
+ *
+ * If two separate pieces of the same program need their own i/o, it is legal
+ * for each to create their own queue. This will prevent either piece from
+ * accidentally consuming the other's completed tasks. Each queue does require
+ * some amount of resources, but it is not an overwhelming cost. Do not make a
+ * queue for each task, however. It is better to put many tasks into a single
+ * queue. They will be reported in order of completion, not in the order they
+ * were submitted, so it doesn't generally matter what order tasks are started.
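+ *
+ * For example, an audio streamer and a level loader might each keep a queue
+ * of their own (a sketch; the variable names are illustrative):
+ *
+ * ```c
+ * SDL_AsyncIOQueue *audio_queue = SDL_CreateAsyncIOQueue();
+ * SDL_AsyncIOQueue *level_queue = SDL_CreateAsyncIOQueue();
+ * // Each piece polls only its own queue, so the level loader never
+ * // accidentally consumes a completed audio task (and vice versa).
+ * ```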
+ *
+ * One async i/o queue can be shared by multiple threads, or one thread can
+ * have more than one queue, but the most efficient way--if ruthless
+ * efficiency is the goal--is to have one queue per thread, with multiple
+ * threads working in parallel, and attempt to keep each queue loaded with
+ * tasks that are both started by and consumed by the same thread. On modern
+ * platforms that can use newer interfaces, this can keep data flowing as
+ * efficiently as possible all the way from storage hardware to the app, with
+ * no contention between threads for access to the same queue.
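+ *
+ * One way to structure that (a sketch; `LoaderThread` is hypothetical app
+ * code that both starts and consumes its own tasks):
+ *
+ * ```c
+ * static int SDLCALL LoaderThread(void *data)
+ * {
+ *     // This thread owns its queue, so no other thread contends for it.
+ *     SDL_AsyncIOQueue *queue = SDL_CreateAsyncIOQueue();
+ *     SDL_LoadFileAsync((const char *) data, queue, NULL);
+ *     SDL_AsyncIOOutcome outcome;
+ *     while (!SDL_GetAsyncIOResult(queue, &outcome)) {
+ *         SDL_Delay(1);  // or do other useful per-thread work.
+ *     }
+ *     if (outcome.result == SDL_ASYNCIO_COMPLETE) {
+ *         SDL_free(outcome.buffer);  // SDL_LoadFileAsync allocated this.
+ *     }
+ *     SDL_DestroyAsyncIOQueue(queue);
+ *     return 0;
+ * }
+ * ```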
+ *
+ * Written data is not guaranteed to make it to physical media by the time a
+ * closing task is completed, unless SDL_CloseAsyncIO is called with its
+ * `flush` parameter set to true, which is to say that a successful result
+ * here can still result in lost data during an unfortunately-timed power
+ * outage if not flushed. However, flushing will take longer and may be
+ * unnecessary, depending on the app's needs.
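+ *
+ * A sketch of a durable close (assuming `asyncio` is an open file and
+ * `queue` will receive the result of the close task):
+ *
+ * ```c
+ * // With `flush` set to true, the close task won't report
+ * // SDL_ASYNCIO_COMPLETE until the written data has been flushed
+ * // toward physical media.
+ * SDL_CloseAsyncIO(asyncio, true, queue, NULL);
+ * ```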
*/

#ifndef SDL_asyncio_h_