// scheduler.rs
/*
Welcome to Dioxus's cooperative, priority-based scheduler.

I hope you enjoy your stay.

Some essential reading:
- https://github.com/facebook/react/blob/main/packages/scheduler/src/forks/Scheduler.js#L197-L200
- https://github.com/facebook/react/blob/main/packages/scheduler/src/forks/Scheduler.js#L440
- https://github.com/WICG/is-input-pending
- https://web.dev/rail/
- https://indepth.dev/posts/1008/inside-fiber-in-depth-overview-of-the-new-reconciliation-algorithm-in-react

# What's going on?

Dioxus is a framework for "user experience" - not just "user interfaces." Part of the "experience" is keeping the UI
snappy and "jank free" even under heavy workloads. Dioxus already has the "speed" part figured out - but there's no
point in being "fast" if you can't also be "responsive."

As such, Dioxus can manually decide on what work is most important at any given moment in time. With a properly tuned
priority system, Dioxus can ensure that user interaction is prioritized and committed as soon as possible (sub 100ms).

The controller responsible for this priority management is called the "scheduler" and is responsible for juggling many
different types of work simultaneously.

# How does it work?

Per the RAIL guide, we want to make sure that A) inputs are handled ASAP and B) animations are not blocked.
React-three-fiber is a testament to how amazing this can be - a ThreeJS scene is threaded in between work periods of
React, and the UI still stays snappy!

While it's straightforward to run code ASAP and be as "fast as possible", what's _not_ straightforward is how to do
this while not blocking the main thread. The current prevailing thought is to stop working periodically so the browser
has time to paint and run animations. When the browser is finished, we can step in and continue our work.

React-Fiber uses the "Fiber" concept to achieve pause-resume functionality. This is worth reading up on, but not
necessary to understand what we're doing here. In Dioxus, our DiffMachine is guided by DiffInstructions - essentially
"commands" that guide the diffing algorithm through the tree. Our "diff_scope" method is async - we can literally pause
our DiffMachine "mid-sentence" (so to speak) by just stopping the poll on the future. The DiffMachine periodically yields
so Rust's async machinery can take over, allowing us to customize when exactly to pause it.

React's "should_yield" method is more complex than ours, and I assume we'll move in that direction as Dioxus matures. For
now, Dioxus just assumes a TimeoutFuture and selects! on both the diff algorithm and the timeout. If the DiffMachine
finishes before the timeout, then Dioxus will work on any pending work in the interim. If there is no pending work, then
the changes are committed, and coroutines are polled during the idle period. However, if the timeout expires, then the
DiffMachine future is paused and saved (self-referentially).

# Priority System

So far, we've been able to thread our Dioxus work between animation frames - the main thread is not blocked! But that
doesn't help us _under load_. How do we still stay snappy... even if we're doing a lot of work? Well, that's where
priorities come into play. The goal with priorities is to schedule shorter work as a "high" priority and longer work as
a "lower" priority. That way, we can interrupt long-running low-priority work with short-running high-priority work.

React's priority system is quite complex. There are 5 levels of priority and 2 distinctions between UI events (discrete,
continuous). I believe React really only uses 3 priority levels, and "idle" priority isn't used... Regardless, there's
some batching going on.

For Dioxus, we're going with a 4-tier priority system:
- Sync: things that need to be done by the next frame, like TextInput on controlled elements
- High: events that block all others - clicks, keyboard, and hovers
- Medium: UI events caused by the user but not directly - scrolls/forms/focus (all other events)
- Low: set_state called asynchronously, and anything generated by suspense

In the "Sync" state, we abort our "idle wait" future, resolve the sync queue immediately, and escape. Because we completed
work before the next rAF, any edits can be immediately processed before the frame ends. Generally though, we want to leave
as much time to rAF as possible. "Sync" is currently only used by onInput - we'll leave some docs telling people not to
do anything too arduous from onInput.

For the rest, we defer to the rIC period and work down each queue from high to low.
*/
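The pause-at-the-deadline idea above can be sketched without any async machinery: process one unit of work at a time, checking a deadline between units so the thread is handed back for painting. This is an illustrative, std-only sketch - `work_with_budget`, `units`, and the `u32` work items are hypothetical names; the real scheduler drives an async DiffMachine with a TimeoutFuture and `select!` instead.

```rust
use std::time::{Duration, Instant};

// Illustrative sketch of cooperative, deadline-bounded work (hypothetical
// names; the real scheduler pauses an async DiffMachine instead).
// Returns true if all work finished before the deadline; false if it paused,
// leaving the remaining units queued for the next frame.
fn work_with_budget(units: &mut Vec<u32>, budget: Duration) -> bool {
    let deadline = Instant::now() + budget;
    while let Some(_unit) = units.pop() {
        // ... diff one node / process one unit of work here ...
        if Instant::now() >= deadline {
            // Out of time: yield the thread so the browser can paint.
            return false;
        }
    }
    true
}
```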
use crate::heuristics::*;
use crate::innerlude::*;
use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
use futures_util::stream::FuturesUnordered;
use futures_util::{future::FusedFuture, pin_mut, Future, FutureExt, StreamExt};
use fxhash::{FxHashMap, FxHashSet};
use indexmap::IndexSet;
use slab::Slab;
use smallvec::SmallVec;
use std::{
    any::{Any, TypeId},
    cell::{Cell, RefCell, RefMut, UnsafeCell},
    collections::{BTreeMap, BTreeSet, BinaryHeap, HashMap, HashSet, VecDeque},
    fmt::Display,
    pin::Pin,
    rc::Rc,
};
#[derive(Clone)]
pub(crate) struct EventChannel {
    pub task_counter: Rc<Cell<u64>>,
    pub sender: UnboundedSender<SchedulerMsg>,
    pub schedule_any_immediate: Rc<dyn Fn(ScopeId)>,
    pub submit_task: Rc<dyn Fn(FiberTask) -> TaskHandle>,
    pub get_shared_context: Rc<dyn Fn(ScopeId, TypeId) -> Option<Rc<dyn Any>>>,
}

pub enum SchedulerMsg {
    // events from the host
    UiEvent(UserEvent),

    // setstate
    Immediate(ScopeId),

    // tasks
    SubmitTask(FiberTask, u64),
    ToggleTask(u64),
    PauseTask(u64),
    ResumeTask(u64),
    DropTask(u64),
}
/// The scheduler holds basically everything around "working".
///
/// Each scope has the ability to lightly interact with the scheduler (IE, schedule an update), but ultimately the scheduler calls the components.
///
/// In Dioxus, the scheduler provides 4 priority levels - each with its own "DiffMachine". The DiffMachine state can be saved if the deadline runs
/// out.
///
/// Saved DiffMachine state can be self-referential, so we need to be careful about how we save it. All self-referential data is a link between
/// pending DiffInstructions, Mutations, and their underlying Scope. It's okay for us to be self-referential with this data, provided we don't
/// shift to a higher-priority task that needs mutable access to the same scopes.
///
/// We can prevent this safety issue from occurring if we track which scopes are invalidated when starting a new task.
pub(crate) struct Scheduler {
    /// All mounted components are arena allocated to make additions, removals, and references easy to work with.
    /// A generational arena is used to re-use slots of deleted scopes without having to resize the underlying arena.
    ///
    /// This is wrapped in an UnsafeCell because we will need to get mutable access to unique values in unique bump arenas
    /// and Rust's guarantees cannot prove that this is safe. We will need to maintain the safety guarantees manually.
    pub pool: ResourcePool,

    pub heuristics: HeuristicsEngine,

    pub receiver: UnboundedReceiver<SchedulerMsg>,

    // Garbage stored
    pub pending_garbage: FxHashSet<ScopeId>,

    // In-flight futures
    pub async_tasks: FuturesUnordered<FiberTask>,

    // scheduler stuff
    pub current_priority: EventPriority,
    pub ui_events: VecDeque<UserEvent>,
    pub pending_immediates: VecDeque<ScopeId>,
    pub pending_tasks: VecDeque<UserEvent>,
    pub garbage_scopes: HashSet<ScopeId>,
    pub lanes: [PriorityLane; 4],
}
impl Scheduler {
    pub(crate) fn new() -> Self {
        /*
        Preallocate 2000 elements and 100 scopes to avoid dynamic allocation.
        Perhaps this should be configurable from some external config?
        */
        let components = Rc::new(UnsafeCell::new(Slab::with_capacity(100)));
        let raw_elements = Rc::new(UnsafeCell::new(Slab::with_capacity(2000)));

        let heuristics = HeuristicsEngine::new();

        let (sender, receiver) = futures_channel::mpsc::unbounded::<SchedulerMsg>();
        let task_counter = Rc::new(Cell::new(0));

        let channel = EventChannel {
            task_counter: task_counter.clone(),
            sender: sender.clone(),
            schedule_any_immediate: {
                let sender = sender.clone();
                Rc::new(move |id| sender.unbounded_send(SchedulerMsg::Immediate(id)).unwrap())
            },
            submit_task: {
                let sender = sender.clone();
                Rc::new(move |fiber_task| {
                    let task_id = task_counter.get();
                    task_counter.set(task_id + 1);
                    sender
                        .unbounded_send(SchedulerMsg::SubmitTask(fiber_task, task_id))
                        .unwrap();
                    TaskHandle {
                        our_id: task_id,
                        sender: sender.clone(),
                    }
                })
            },
            get_shared_context: {
                let components = components.clone();
                Rc::new(move |id, ty| {
                    let components = unsafe { &*components.get() };
                    let mut search: Option<&Scope> = components.get(id.0);
                    while let Some(inner) = search.take() {
                        if let Some(shared) = inner.shared_contexts.borrow().get(&ty) {
                            return Some(shared.clone());
                        } else {
                            search = inner.parent_idx.and_then(|id| components.get(id.0));
                        }
                    }
                    None
                })
            },
        };

        let pool = ResourcePool {
            components: components.clone(),
            raw_elements,
            channel,
        };

        let async_tasks = FuturesUnordered::new();

        Self {
            pool,
            receiver,
            async_tasks,
            pending_garbage: FxHashSet::default(),
            heuristics,
            ui_events: VecDeque::new(),
            pending_immediates: VecDeque::new(),
            pending_tasks: VecDeque::new(),
            garbage_scopes: HashSet::new(),
            current_priority: EventPriority::Low,
            // sorted high to low by priority (0 = immediate, 3 = low)
            lanes: [
                PriorityLane::new(),
                PriorityLane::new(),
                PriorityLane::new(),
                PriorityLane::new(),
            ],
        }
    }
    pub fn manually_poll_events(&mut self) {
        while let Ok(Some(msg)) = self.receiver.try_next() {
            self.handle_channel_msg(msg);
        }
    }

    // Converts UI events into dirty scopes with various priorities
    pub fn consume_pending_events(&mut self) {
        // consume all events that are "continuous" to be batched
        // if we run into a discrete event, then bail early
        while let Some(trigger) = self.ui_events.pop_back() {
            if let Some(scope) = self.pool.get_scope_mut(trigger.scope) {
                if let Some(element) = trigger.mounted_dom_id {
                    let priority = match trigger.name {
                        // Clipboard
                        "copy" | "cut" | "paste" => EventPriority::Medium,

                        // Composition
                        "compositionend" | "compositionstart" | "compositionupdate" => {
                            EventPriority::Low
                        }

                        // Keyboard
                        "keydown" | "keypress" | "keyup" => EventPriority::Low,

                        // Focus
                        "focus" | "blur" => EventPriority::Low,

                        // Form
                        "change" | "input" | "invalid" | "reset" | "submit" => EventPriority::Low,

                        // Mouse ("mousemove" is handled separately below so the
                        // continuous Medium arm is actually reachable)
                        "click" | "contextmenu" | "doubleclick" | "drag" | "dragend"
                        | "dragenter" | "dragexit" | "dragleave" | "dragover" | "dragstart"
                        | "drop" | "mousedown" | "mouseenter" | "mouseleave"
                        | "mouseout" | "mouseover" | "mouseup" => EventPriority::Low,
                        "mousemove" => EventPriority::Medium,

                        // Pointer
                        "pointerdown" | "pointermove" | "pointerup" | "pointercancel"
                        | "gotpointercapture" | "lostpointercapture" | "pointerenter"
                        | "pointerleave" | "pointerover" | "pointerout" => EventPriority::Low,

                        // Selection
                        "select" => EventPriority::Low,

                        // Touch
                        "touchcancel" | "touchend" | "touchmove" | "touchstart" => {
                            EventPriority::Low
                        }

                        // Wheel
                        "scroll" | "wheel" => EventPriority::Low,

                        // Media
                        "abort" | "canplay" | "canplaythrough" | "durationchange" | "emptied"
                        | "encrypted" | "ended" | "error" | "loadeddata" | "loadedmetadata"
                        | "loadstart" | "pause" | "play" | "playing" | "progress"
                        | "ratechange" | "seeked" | "seeking" | "stalled" | "suspend"
                        | "timeupdate" | "volumechange" | "waiting" => EventPriority::Low,

                        // Animation
                        "animationstart" | "animationend" | "animationiteration" => {
                            EventPriority::Low
                        }

                        // Transition
                        "transitionend" => EventPriority::Low,

                        // Toggle
                        "toggle" => EventPriority::Low,

                        _ => EventPriority::Low,
                    };

                    scope.call_listener(trigger.event, element);

                    // let receiver = self.immediate_receiver.clone();
                    // let mut receiver = receiver.borrow_mut();

                    // // Drain the immediates into the dirty scopes, setting the appropriate priorities
                    // while let Ok(Some(dirty_scope)) = receiver.try_next() {
                    //     self.add_dirty_scope(dirty_scope, trigger.priority)
                    // }
                }
            }
        }
    }
    // nothing to do, no events on channels, no work
    pub fn has_any_work(&self) -> bool {
        let pending_lanes = self.lanes.iter().any(|lane| lane.has_work());
        pending_lanes || self.has_pending_events()
    }

    pub fn has_pending_events(&self) -> bool {
        !self.ui_events.is_empty()
    }

    fn shift_priorities(&mut self) {
        self.current_priority = match (
            self.lanes[0].has_work(),
            self.lanes[1].has_work(),
            self.lanes[2].has_work(),
            self.lanes[3].has_work(),
        ) {
            (true, _, _, _) => EventPriority::Immediate,
            (false, true, _, _) => EventPriority::High,
            (false, false, true, _) => EventPriority::Medium,
            (false, false, false, _) => EventPriority::Low,
        };
    }

    /// re-balance the work lanes, ensuring high-priority work properly bumps away low-priority work
    fn balance_lanes(&mut self) {}

    fn load_current_lane(&mut self) -> &mut PriorityLane {
        match self.current_priority {
            EventPriority::Immediate => &mut self.lanes[0],
            EventPriority::High => &mut self.lanes[1],
            EventPriority::Medium => &mut self.lanes[2],
            EventPriority::Low => &mut self.lanes[3],
        }
    }

    fn save_work(&mut self, lane: SavedDiffWork) {
        let saved: SavedDiffWork<'static> = unsafe { std::mem::transmute(lane) };
        self.load_current_lane().saved_state = Some(saved);
    }

    fn load_work(&mut self) -> SavedDiffWork<'static> {
        match self.current_priority {
            EventPriority::Immediate => todo!(),
            EventPriority::High => todo!(),
            EventPriority::Medium => todo!(),
            EventPriority::Low => todo!(),
        }
    }
    /// Work the scheduler down, not polling any ongoing tasks.
    ///
    /// Will use the standard priority-based scheduling, batching, etc, but just won't interact with the async reactor.
    pub fn work_sync<'a>(&'a mut self) -> Vec<Mutations<'a>> {
        let mut committed_mutations = Vec::new();

        self.manually_poll_events();

        if !self.has_any_work() {
            return committed_mutations;
        }

        self.consume_pending_events();

        while self.has_any_work() {
            self.shift_priorities();
            self.work_on_current_lane(|| false, &mut committed_mutations);
        }

        committed_mutations
    }
    /// The primary workhorse of the VirtualDOM.
    ///
    /// Uses some fairly complex logic to schedule what work should be produced.
    ///
    /// Returns a list of successful mutations.
    pub async fn work_with_deadline<'a>(
        &'a mut self,
        deadline: impl Future<Output = ()>,
    ) -> Vec<Mutations<'a>> {
        /*
        Strategy:
        - When called, check for any UI events that might've been received since the last frame.
        - Dump all UI events into a "pending discrete" queue and a "pending continuous" queue.

        - If there are any pending discrete events, then elevate our priority level. If our priority level is already
          "high," then we need to finish the high-priority work first. If the current work is "low," then analyze what
          scopes will be invalidated by this new work. If this interferes with any in-flight medium or low work, then we
          need to bump the other work out of the way, or choose to process it so we don't have any conflicts.
          'static components have a leg up here since their work can be re-used among multiple scopes.
          "High priority" is only for blocking! Should only be used on "clicks".

        - If there are no pending discrete events, then check for continuous events. These can be completely batched:
          - we batch completely until we run into a discrete event
          - all continuous events are batched together
          - so D C C C C C would be two separate events - D and C. IE onclick and onscroll
          - D C C C C C C D C C C D would be D C D C D in 5 distinct phases.

        - !listener bubbling is not currently implemented properly and will need to be implemented somehow in the future
          - we need to keep track of element parents to be able to traverse properly

        Open questions:
        - what if we get two clicks from the component during the same slice?
          - should we batch?
          - react says no - they are continuous
          - but if we received both - then we don't need to diff, do we? run as many as we can and then finally diff?
        */
        let mut committed_mutations = Vec::<Mutations<'static>>::new();

        pin_mut!(deadline);

        loop {
            // Internalize any pending work since the last time we ran
            self.manually_poll_events();

            // Wait for any new events if we have nothing to do
            // todo: poll the events once even if there is work to do to prevent starvation
            if !self.has_any_work() {
                futures_util::select! {
                    _ = self.async_tasks.next() => {}
                    msg = self.receiver.next() => self.handle_channel_msg(msg.unwrap()),
                    _ = (&mut deadline).fuse() => return committed_mutations,
                }
            }

            // Create work from the pending event queue
            self.consume_pending_events();

            // shift to the correct lane
            self.shift_priorities();

            let mut deadline_reached = || (&mut deadline).now_or_never().is_some();

            let finished_before_deadline =
                self.work_on_current_lane(&mut deadline_reached, &mut committed_mutations);

            if !finished_before_deadline {
                break;
            }
        }

        committed_mutations
    }
    /// Load the current lane, and work on it, periodically checking in if the deadline has been reached.
    ///
    /// Returns true if the lane is finished before the deadline could be met.
    pub fn work_on_current_lane(
        &mut self,
        deadline_reached: impl FnMut() -> bool,
        mutations: &mut Vec<Mutations>,
    ) -> bool {
        // Work through the current subtree, and commit the results when it finishes.
        // When the deadline expires, give back the work.
        let saved_state = self.load_work();

        // We have to split away some parts of ourself - the current lane is borrowed mutably
        let mut shared = self.pool.clone();
        let mut machine = unsafe { saved_state.promote(&mut shared) };

        if machine.stack.is_empty() {
            let shared = self.pool.clone();

            // sort the dirty scopes by height so parents are diffed before their children
            self.current_lane().dirty_scopes.sort_by(|a, b| {
                let h1 = shared.get_scope(*a).unwrap().height;
                let h2 = shared.get_scope(*b).unwrap().height;
                h1.cmp(&h2)
            });

            if let Some(scope) = self.current_lane().dirty_scopes.pop() {
                let component = self.pool.get_scope(scope).unwrap();
                let (old, new) = (component.frames.wip_head(), component.frames.fin_head());
                machine.stack.push(DiffInstruction::Diff { new, old });
            }
        }

        let deadline_expired = machine.work(deadline_reached);

        let machine: DiffMachine<'static> = unsafe { std::mem::transmute(machine) };
        let mut saved = machine.save();

        if deadline_expired {
            self.save_work(saved);
            false
        } else {
            for node in saved.seen_scopes.drain() {
                self.current_lane().dirty_scopes.remove(&node);
            }

            let mut new_mutations = Mutations::new();
            std::mem::swap(&mut new_mutations, &mut saved.mutations);
            mutations.push(new_mutations);

            self.save_work(saved);
            true
        }
    }

    pub fn current_lane(&mut self) -> &mut PriorityLane {
        match self.current_priority {
            EventPriority::Immediate => &mut self.lanes[0],
            EventPriority::High => &mut self.lanes[1],
            EventPriority::Medium => &mut self.lanes[2],
            EventPriority::Low => &mut self.lanes[3],
        }
    }

    pub fn handle_channel_msg(&mut self, msg: SchedulerMsg) {
        match msg {
            SchedulerMsg::Immediate(_) => todo!(),
            SchedulerMsg::UiEvent(_) => todo!(),

            SchedulerMsg::SubmitTask(_, _) => todo!(),
            SchedulerMsg::ToggleTask(_) => todo!(),
            SchedulerMsg::PauseTask(_) => todo!(),
            SchedulerMsg::ResumeTask(_) => todo!(),
            SchedulerMsg::DropTask(_) => todo!(),
        }
    }

    fn add_dirty_scope(&mut self, scope: ScopeId, priority: EventPriority) {
        todo!()
        // match priority {
        //     EventPriority::High => self.high_priority.dirty_scopes.insert(scope),
        //     EventPriority::Medium => self.medium_priority.dirty_scopes.insert(scope),
        //     EventPriority::Low => self.low_priority.dirty_scopes.insert(scope),
        // };
    }
}
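The discrete/continuous batching rule described in the strategy comment of `work_with_deadline` (where D C C C C C C D C C C D collapses into the 5 phases D C D C D) can be sketched as a small fold that coalesces runs of continuous events. This is an illustrative sketch with hypothetical names (`batch_phases`, `'D'`/`'C'` markers), not scheduler API:

```rust
// Sketch of the batching rule: each discrete event ('D') starts its own
// phase, while a run of continuous events ('C') coalesces into one phase.
fn batch_phases(events: &[char]) -> Vec<char> {
    let mut phases = Vec::new();
    for &ev in events {
        // Start a new phase unless we're extending a continuous run.
        if ev == 'D' || phases.last() != Some(&'C') {
            phases.push(ev);
        }
    }
    phases
}
```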
pub(crate) struct PriorityLane {
    pub dirty_scopes: IndexSet<ScopeId>,
    pub saved_state: Option<SavedDiffWork<'static>>,
    pub in_progress: bool,
}

impl PriorityLane {
    pub fn new() -> Self {
        Self {
            saved_state: None,
            dirty_scopes: Default::default(),
            in_progress: false,
        }
    }

    fn has_work(&self) -> bool {
        todo!()
    }

    fn work(&mut self) {
        let _scope = self.dirty_scopes.pop();
    }
}

pub struct TaskHandle {
    pub(crate) sender: UnboundedSender<SchedulerMsg>,
    pub(crate) our_id: u64,
}

impl TaskHandle {
    /// Toggles this coroutine off/on.
    ///
    /// This method is not synchronous - your task will not stop immediately.
    pub fn toggle(&self) {
        self.sender
            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
            .unwrap()
    }

    /// Resumes this coroutine.
    ///
    /// This method is not synchronous - your task will not resume immediately.
    pub fn resume(&self) {
        self.sender
            .unbounded_send(SchedulerMsg::ResumeTask(self.our_id))
            .unwrap()
    }

    /// Stops this coroutine.
    ///
    /// This method is not synchronous - your task will not stop immediately.
    pub fn stop(&self) {
        self.sender
            .unbounded_send(SchedulerMsg::PauseTask(self.our_id))
            .unwrap()
    }

    /// Restarts this coroutine.
    ///
    /// This method is not synchronous - your task will not restart immediately.
    pub fn restart(&self) {
        self.sender
            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
            .unwrap()
    }
}
#[derive(serde::Serialize, serde::Deserialize, Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub struct ScopeId(pub usize);

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub struct ElementId(pub usize);

impl Display for ElementId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl ElementId {
    pub fn as_u64(self) -> u64 {
        self.0 as u64
    }
}
/// Priority of Event Triggers.
///
/// Internally, Dioxus will abort work that's taking too long if new, more important work arrives. Unlike React, Dioxus
/// won't be afraid to pause work or flush changes to the RealDOM. This is called "cooperative scheduling". Some renderers
/// implement this form of scheduling internally, however Dioxus will perform its own scheduling as well.
///
/// The ultimate goal of the scheduler is to manage latency of changes, prioritizing "flashier" changes over "subtler" changes.
///
/// React has a 5-tier priority system (plus a "NoPriority" sentinel). However, they break things into "Continuous" and
/// "Discrete" priority. For now, we keep it simple, and just use a 4-tier priority system.
///
/// - NoPriority = 0
/// - LowPriority = 1
/// - NormalPriority = 2
/// - UserBlocking = 3
/// - HighPriority = 4
/// - ImmediatePriority = 5
///
/// We still have a concept of discrete vs continuous though - discrete events won't be batched, but continuous events will.
/// This means that multiple "scroll" events will be processed in a single frame, but multiple "click" events will be
/// flushed before proceeding. Multiple discrete events are highly unlikely, though.
#[derive(Debug, PartialEq, Eq, Clone, Copy, Hash, PartialOrd, Ord)]
pub enum EventPriority {
    /// Work that must be completed during the EventHandler phase.
    ///
    /// Currently this is reserved for controlled inputs.
    Immediate = 3,

    /// "High Priority" work will not interrupt other high-priority work, but will interrupt medium- and low-priority work.
    ///
    /// This is typically reserved for things like user interaction.
    ///
    /// React calls these "discrete" events, but with an extra category of "user-blocking" (Immediate).
    High = 2,

    /// "Medium priority" work is generated by page events not triggered by the user. These types of events are less important
    /// than "High Priority" events and will take precedence over low-priority events.
    ///
    /// This is typically reserved for VirtualEvents that are not related to keyboard or mouse input.
    ///
    /// React calls these "continuous" events (e.g. mouse move, mouse wheel, touch move, etc).
    Medium = 1,

    /// "Low Priority" work will always be pre-empted unless the work is significantly delayed, in which case it will be
    /// advanced to the front of the work queue until completed.
    ///
    /// The primary user of Low Priority work is the asynchronous work system (suspense).
    ///
    /// This is considered "idle" work or "background" work.
    Low = 0,
}
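Because the discriminants place `Immediate` highest and the enum derives `Ord`, ordinary comparisons (or `max()`) select the most urgent pending priority. A self-contained sketch - it redeclares a look-alike `Priority` enum and a hypothetical `most_urgent` helper purely for illustration:

```rust
// Look-alike of EventPriority, redeclared here only so the example is
// self-contained; the derived Ord follows the explicit discriminants.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    Low = 0,
    Medium = 1,
    High = 2,
    Immediate = 3,
}

// Pick the most urgent of the priorities that currently have work queued.
fn most_urgent(pending: &[Priority]) -> Option<Priority> {
    pending.iter().copied().max()
}
```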
#[derive(Clone)]
pub(crate) struct ResourcePool {
    /*
    This *has* to be an UnsafeCell.

    Each BumpFrame and Scope is located in this Slab - and we'll need mutable access to a scope while holding on to
    its bumpframe contents immutably.

    However, all of the interaction with this Slab is done in this module and the Diff module, so it should be fairly
    simple to audit.

    Wrapped in Rc so the "get_shared_context" closure can walk the tree (immutably!)
    */
    pub components: Rc<UnsafeCell<Slab<Scope>>>,

    /*
    Yes, a slab of "nil". We use this for properly ordering ElementIDs - all we care about is the allocation strategy
    that slab uses. The slab essentially just provides keys for ElementIDs that we can re-use in a Vec on the client.

    This just happened to be the simplest and most efficient way to implement a deterministic keyed map with slot reuse.

    In the future, we could actually store a pointer to the VNode instead of nil to provide O(1) lookup for VNodes...
    */
    pub raw_elements: Rc<UnsafeCell<Slab<()>>>,

    pub channel: EventChannel,
}
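The "slab of ()" trick above relies only on deterministic key reuse: a freed key is handed back out by a later insert, so ElementIds can index a dense Vec on the client. A minimal std-only stand-in of that allocation strategy (the `NilSlab` type is hypothetical, not the real `slab::Slab` API):

```rust
// Minimal stand-in for the "slab of ()" idea: insert() hands out keys,
// and keys freed by remove() are recycled (LIFO) by later inserts.
struct NilSlab {
    occupied: Vec<bool>,
    free: Vec<usize>,
}

impl NilSlab {
    fn new() -> Self {
        Self { occupied: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self) -> usize {
        if let Some(key) = self.free.pop() {
            // Re-use a previously freed slot.
            self.occupied[key] = true;
            key
        } else {
            // Grow: mint the next fresh key.
            self.occupied.push(true);
            self.occupied.len() - 1
        }
    }

    fn remove(&mut self, key: usize) {
        self.occupied[key] = false;
        self.free.push(key);
    }
}
```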
impl ResourcePool {
    /// this is unsafe because the caller needs to track which other scopes it's already using
    pub fn get_scope(&self, idx: ScopeId) -> Option<&Scope> {
        let inner = unsafe { &*self.components.get() };
        inner.get(idx.0)
    }

    /// this is unsafe because the caller needs to track which other scopes it's already using
    pub fn get_scope_mut(&self, idx: ScopeId) -> Option<&mut Scope> {
        let inner = unsafe { &mut *self.components.get() };
        inner.get_mut(idx.0)
    }

    // return a bumpframe with a lifetime attached to the arena borrow
    // this is useful for merging lifetimes
    pub fn with_scope_vnode<'b>(
        &self,
        _id: ScopeId,
        _f: impl FnOnce(&mut Scope) -> &VNode<'b>,
    ) -> Option<&VNode<'b>> {
        todo!()
    }

    pub fn try_remove(&self, id: ScopeId) -> Option<Scope> {
        let inner = unsafe { &mut *self.components.get() };
        Some(inner.remove(id.0))
        // .try_remove(id.0)
        // .ok_or_else(|| Error::FatalInternal("Scope not found"))
    }

    pub fn reserve_node(&self) -> ElementId {
        let els = unsafe { &mut *self.raw_elements.get() };
        ElementId(els.insert(()))
    }

    /// return the id, freeing the space of the original node
    pub fn collect_garbage(&self, id: ElementId) {
        todo!("garbage collection is currently WIP")
        // self.raw_elements.remove(id.0);
    }

    pub fn insert_scope_with_key(&self, f: impl FnOnce(ScopeId) -> Scope) -> ScopeId {
        let g = unsafe { &mut *self.components.get() };
        let entry = g.vacant_entry();
        let id = ScopeId(entry.key());
        entry.insert(f(id));
        id
    }

    pub fn borrow_bumpframe(&self) {}
}