
wip: considering making vdom passed everywhere

Jonathan Kelley 3 years ago
commit ac3a7b1

+ 1 - 1
packages/core/.vscode/settings.json

@@ -1,3 +1,3 @@
 {
-  "rust-analyzer.inlayHints.enable": false
+  "rust-analyzer.inlayHints.enable": true
 }

+ 2 - 0
packages/core/Cargo.toml

@@ -42,6 +42,8 @@ once_cell = "1.8.0"
 # # Serialize the Edits for use in Webview/Liveview instances
 serde = { version = "1", features = ["derive"], optional = true }
 
+indexmap = "1.7.0"
+
 
 [dev-dependencies]
 anyhow = "1.0.42"

+ 6 - 7
packages/core/src/diff.rs

@@ -105,23 +105,21 @@ use DomEdit::*;
 /// Funnily enough, this stack machine's entire job is to create instructions for another stack machine to execute. It's
 /// stack machines all the way down!
 pub struct DiffMachine<'bump> {
-    vdom: &'bump Scheduler,
+    pub vdom: &'bump Scheduler,
 
     pub mutations: Mutations<'bump>,
 
     pub stack: DiffStack<'bump>,
+
     pub diffed: FxHashSet<ScopeId>,
+
     pub seen_scopes: FxHashSet<ScopeId>,
 }
 
 impl<'bump> DiffMachine<'bump> {
-    pub(crate) fn new(
-        edits: Mutations<'bump>,
-        cur_scope: ScopeId,
-        shared: &'bump Scheduler,
-    ) -> Self {
+    pub(crate) fn new(edits: Mutations<'bump>, shared: &'bump Scheduler) -> Self {
         Self {
-            stack: DiffStack::new(cur_scope),
+            stack: DiffStack::new(),
             mutations: edits,
             vdom: shared,
             diffed: FxHashSet::default(),
@@ -138,6 +136,7 @@ impl<'bump> DiffMachine<'bump> {
     //
     pub async fn diff_scope(&mut self, id: ScopeId) {
         if let Some(component) = self.vdom.get_scope_mut(id) {
+            self.stack.scope_stack.push(id);
             let (old, new) = (component.frames.wip_head(), component.frames.fin_head());
             self.stack.push(DiffInstruction::DiffNode { new, old });
             self.work().await;
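A quick sketch of how the reworked DiffMachine gets driven after this change (illustrative only; the real call site is the virtual_dom.rs hunk at the bottom of this diff, and `run_one_diff` is just a placeholder name used inside the core crate):

    // The current scope is no longer handed to the constructor;
    // diff_scope now pushes it onto the scope stack itself.
    async fn run_one_diff<'a>(scheduler: &'a Scheduler, scope: ScopeId) -> Mutations<'a> {
        let mut machine = DiffMachine::new(Mutations::new(), scheduler);
        machine.diff_scope(scope).await;
        machine.mutations
    }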

+ 2 - 2
packages/core/src/diff_stack.rs

@@ -47,11 +47,11 @@ pub struct DiffStack<'bump> {
 }
 
 impl<'bump> DiffStack<'bump> {
-    pub fn new(cur_scope: ScopeId) -> Self {
+    pub fn new() -> Self {
         Self {
             instructions: Vec::with_capacity(1000),
             nodes_created_stack: smallvec![],
-            scope_stack: smallvec![cur_scope],
+            scope_stack: smallvec![],
         }
     }
 

+ 128 - 48
packages/core/src/scheduler.rs

@@ -1,3 +1,73 @@
+/*
+Welcome to Dioxus's cooperative, priority-based scheduler.
+
+I hope you enjoy your stay.
+
+Some essential reading:
+- https://github.com/facebook/react/blob/main/packages/scheduler/src/forks/Scheduler.js#L197-L200
+- https://github.com/facebook/react/blob/main/packages/scheduler/src/forks/Scheduler.js#L440
+- https://github.com/WICG/is-input-pending
+- https://web.dev/rail/
+- https://indepth.dev/posts/1008/inside-fiber-in-depth-overview-of-the-new-reconciliation-algorithm-in-react
+
+# What's going on?
+
+Dioxus is a framework for "user experience" - not just "user interfaces." Part of the "experience" is keeping the UI
+snappy and "jank free" even under heavy workloads. Dioxus already has the "speed" part figured out - but there's no
+point in being "fast" if you can't also be "responsive."
+
+As such, Dioxus can manually decide what work is most important at any given moment in time. With a properly tuned
+priority system, Dioxus can ensure that user interaction is prioritized and committed as soon as possible (sub 100ms).
+The controller responsible for this priority management is called the "scheduler", and its job is to juggle many
+different types of work simultaneously.
+
+# How does it work?
+
+Per the RAIL guide, we want to make sure that A) inputs are handled ASAP and B) animations are not blocked.
+React-three-fiber is a testament to how amazing this can be - a ThreeJS scene is threaded in between work periods of
+React, and the UI still stays snappy!
+
+While it's straightforward to run code ASAP and be as "fast as possible", what's _not_ straightforward is how to do
+this while not blocking the main thread. The current prevailing thought is to stop working periodically so the browser
+has time to paint and run animations. When the browser is finished, we can step in and continue our work.
+
+React-Fiber uses the "Fiber" concept to achieve a pause-resume functionality. This is worth reading up on, but not
+necessary to understand what we're doing here. In Dioxus, our DiffMachine is guided by DiffInstructions - essentially
+"commands" that guide the Diffing algorithm through the tree. Our "diff_scope" method is async - we can literally pause
+our DiffMachine "mid-sentence" (so to speak) by just stopping the poll on the future. The DiffMachine periodically yields
+so Rust's async machinery can take over, allowing us to customize when exactly to pause it.
+
+React's "should_yield" method is more complex than ours, and I assume we'll move in that direction as Dioxus matures. For
+now, Dioxus just uses a TimeoutFuture and select!s on both the diff algorithm and the timeout. If the DiffMachine finishes
+before the timeout, then Dioxus will pick up any pending work in the interim. If there is no pending work, then the changes
+are committed, and coroutines are polled during the idle period. However, if the timeout expires, then the DiffMachine
+future is paused and saved (self-referentially).
+
+# Priority System
+
+So far, we've been able to thread our Dioxus work between animation frames - the main thread is not blocked! But that
+doesn't help us _under load_. How do we still stay snappy... even if we're doing a lot of work? Well, that's where
+priorities come into play. The goal with priorities is to schedule shorter work as a "high" priority and longer work as
+a "lower" priority. That way, we can interrupt long-running low-prioty work with short-running high-priority work.
+
+React's priority system is quite complex.
+
+There are 5 levels of priority and 2 distinctions between UI events (discrete, continuous). I believe React really only
+uses 3 priority levels and "idle" priority isn't used... Regardless, there's some batching going on.
+
+For Dioxus, we're going with a four-tier priority system:
+- Sync: Things that need to be done by the next frame, like TextInput on controlled elements
+- High: for events that block all others - clicks, keyboard, and hovers
+- Medium: for UI events caused by the user but not directly - scrolls/forms/focus (all other events)
+- Low: set_state called asynchronously, and anything generated by suspense
+
+In "Sync" state, we abort our "idle wait" future, and resolve the sync queue immediately and escape. Because we completed
+work before the next rAF, any edits can be immediately processed before the frame ends. Generally though, we want to leave
+as much time to rAF as possible. "Sync" is currently only used by onInput - we'll leave some docs telling people not to
+do anything too arduous from onInput.
+
+For the rest, we defer to the rIC period and work down each queue from high to low.
+*/
 use std::cell::{Cell, RefCell, RefMut};
 use std::fmt::Display;
 use std::{cell::UnsafeCell, rc::Rc};
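A minimal sketch of the pause/resume idea described above: race the async diff against a frame-budget future and simply stop polling when the budget wins. None of this is in the commit; TimeoutFuture here is an assumption (any timer future would do, e.g. gloo-timers), and where this sketch just drops the unfinished diff, the real scheduler saves the paused future self-referentially.

    use futures_util::{pin_mut, select, FutureExt};
    use gloo_timers::future::TimeoutFuture; // assumption: any timer future works here

    async fn diff_with_deadline(machine: &mut DiffMachine<'_>, id: ScopeId, budget_ms: u32) -> bool {
        let work = machine.diff_scope(id).fuse();
        let deadline = TimeoutFuture::new(budget_ms).fuse();
        pin_mut!(work, deadline);
        select! {
            _ = work => true,      // finished inside the budget: mutations are ready to commit
            _ = deadline => false, // out of time: stop polling and yield back to the browser
        }
    }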
@@ -7,6 +77,7 @@ use crate::innerlude::*;
 use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
 use futures_util::stream::FuturesUnordered;
 use fxhash::{FxHashMap, FxHashSet};
+use indexmap::IndexSet;
 use slab::Slab;
 use smallvec::SmallVec;
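The four tiers described above line up with the EventPriority enum that appears later in this diff. Here is a hypothetical mapping from raw event names to tiers (not part of the commit; it assumes the enum's remaining Medium and Low variants and only illustrates the Sync/High/Medium/Low split from the comment):

    // Hypothetical helper: which tier a raw UI event would land in.
    fn tier_for_event(name: &str) -> EventPriority {
        match name {
            // "Sync"/Immediate: must resolve before the next frame (controlled text inputs).
            "input" => EventPriority::Immediate,
            // High: direct, blocking user interaction - clicks, keyboard, hovers.
            "click" | "keydown" | "keyup" | "mouseover" => EventPriority::High,
            // Medium: user-caused but indirect - scrolls, forms, focus.
            "scroll" | "focus" | "blur" | "submit" => EventPriority::Medium,
            // Low: async set_state, suspense, everything else.
            _ => EventPriority::Low,
        }
    }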
 
@@ -376,14 +447,42 @@ impl Scheduler {
         // unsafe { std::mem::transmute(fib) }
     }
 
-    /// If a the fiber finishes its works (IE needs to be committed) the scheduler will drop the dirty scope
+    /// The primary workhorse of the VirtualDOM.
     ///
+    /// Uses some fairly complex logic to schedule what work should be produced.
+    ///
+    /// Returns a list of successful mutations.
     ///
     ///
     pub async fn work_with_deadline<'a>(
         &'a mut self,
         deadline: &mut Pin<Box<impl FusedFuture<Output = ()>>>,
     ) -> Vec<Mutations<'a>> {
+        /*
+        Strategy:
+        - When called, check for any UI events that might've been received since the last frame.
+        - Dump all UI events into a "pending discrete" queue and a "pending continuous" queue.
+
+        - If there are any pending discrete events, then elevate our priority level. If our priority level is already "high,"
+            then we need to finish the high priority work first. If the current work is "low" then analyze what scopes
+            will be invalidated by this new work. If this interferes with any in-flight medium or low work, then we need
+            to bump the other work out of the way, or choose to process it so we don't have any conflicts.
+            'static components have a leg up here since their work can be re-used among multiple scopes.
+            "High priority" is only for blocking! Should only be used on "clicks"
+
+        - If there are no pending discrete events, then check for continuous events. These can be completely batched
+
+
+        Open questions:
+        - what if we get two clicks from the component during the same slice?
+            - should we batch?
+            - react says no - they are continuous
+            - but if we received both - then we don't need to diff, do we? run as many as we can and then finally diff?
+
+
+
+
+        */
         let mut committed_mutations = Vec::new();
 
         // TODO:
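One concrete consequence of the strategy above: EventPriority derives Ord (see the enum further down, where Immediate has the largest discriminant), so "elevating" the current priority when a discrete event arrives is just a max. A tiny illustration, not code from this commit:

    // A pending discrete event can only raise the working priority, never lower it.
    fn elevate(current: EventPriority, incoming: EventPriority) -> EventPriority {
        std::cmp::max(current, incoming)
    }

    // elevate(EventPriority::High, EventPriority::Immediate) == EventPriority::Immediate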
@@ -509,6 +608,28 @@ impl Scheduler {
     }
 }
 
+pub struct PriortySystem {
+    pub dirty_scopes: IndexSet<ScopeId>,
+    pub machine: DiffMachine<'static>,
+}
+
+impl PriortySystem {
+    pub fn new() -> Self {
+        Self {
+            machine: DiffMachine::new(edits, shared),
+            dirty_scopes: Default::default(),
+        }
+    }
+
+    fn has_work(&self) -> bool {
+        todo!()
+    }
+
+    fn work(&mut self) {
+        let scope = self.dirty_scopes.pop();
+    }
+}
+
 pub struct TaskHandle {
     pub sender: UnboundedSender<SchedulerMsg>,
     pub our_id: u64,
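The new PriortySystem above is left unfinished in this commit: new() refers to `edits` and `shared` that aren't in scope, and has_work/work are stubs. A sketch of how the stubs might be completed against the IndexSet queue (work is made async here so it can await diff_scope; this is not the commit's code):

    impl PriortySystem {
        fn has_work(&self) -> bool {
            !self.dirty_scopes.is_empty()
        }

        async fn work(&mut self) {
            // IndexSet preserves insertion order; pop() takes the most recently flagged scope.
            while let Some(scope) = self.dirty_scopes.pop() {
                self.machine.diff_scope(scope).await;
            }
        }
    }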
@@ -530,30 +651,6 @@ impl TaskHandle {
     pub fn restart(&self) {}
 }
 
-#[derive(PartialEq, Eq, Copy, Clone, Debug, Hash)]
-pub struct DirtyScope {
-    height: u32,
-    start_tick: u32,
-}
-
-pub struct PriortySystem {
-    pub pending_scopes: Vec<ScopeId>,
-    pub dirty_scopes: HashSet<ScopeId>,
-}
-
-impl PriortySystem {
-    pub fn new() -> Self {
-        Self {
-            pending_scopes: Default::default(),
-            dirty_scopes: Default::default(),
-        }
-    }
-
-    fn has_work(&self) -> bool {
-        self.pending_scopes.len() > 0 || self.dirty_scopes.len() > 0
-    }
-}
-
 #[derive(serde::Serialize, serde::Deserialize, Copy, Clone, PartialEq, Eq, Hash, Debug)]
 pub struct ScopeId(pub usize);
 
@@ -571,28 +668,6 @@ impl ElementId {
     }
 }
 
-// // Whenever a task is ready (complete) Dioxus produces this "AsyncEvent"
-// //
-// // Async events don't necessarily propagate into a scope being ran. It's up to the event itself
-// // to force an update for itself.
-// //
-// // Most async events should have a low priority.
-// //
-// // This type exists for the task/concurrency system to signal that a task is ready.
-// // However, this does not necessarily signal that a scope must be re-ran, so the hook implementation must cause its
-// // own re-run.
-// AsyncEvent {
-//     should_rerender: bool,
-// },
-
-// // Suspense events are a type of async event generated when suspended nodes are ready to be processed.
-// //
-// // they have the lowest priority
-// SuspenseEvent {
-//     hook_idx: usize,
-//     domnode: Rc<Cell<Option<ElementId>>>,
-// },
-
 /// Priority of Event Triggers.
 ///
 /// Internally, Dioxus will abort work that's taking too long if new, more important, work arrives. Unlike React, Dioxus
@@ -616,11 +691,16 @@ impl ElementId {
 /// flushed before proceeding. Multiple discrete events is highly unlikely, though.
 #[derive(Debug, PartialEq, Eq, Clone, Copy, Hash, PartialOrd, Ord)]
 pub enum EventPriority {
+    /// Work that must be completed during the EventHandler phase
+    ///
+    ///
+    Immediate = 3,
+
     /// "High Priority" work will not interrupt other high priority work, but will interrupt medium and low priority work.
     ///
     /// This is typically reserved for things like user interaction.
     ///
-    /// React calls these "discrete" events, but with an extra category of "user-blocking".
+    /// React calls these "discrete" events, but with an extra category of "user-blocking" (Immediate).
     High = 2,
 
     /// "Medium priority" work is generated by page events not triggered by the user. These types of events are less important

+ 2 - 7
packages/core/src/virtual_dom.rs

@@ -194,7 +194,7 @@ impl VirtualDom {
     }
 
     pub async fn diff_async<'s>(&'s mut self) -> Mutations<'s> {
-        let mut diff_machine = DiffMachine::new(Mutations::new(), self.base_scope, &self.scheduler);
+        let mut diff_machine = DiffMachine::new(Mutations::new(), &self.scheduler);
 
         let cur_component = self
             .scheduler
@@ -203,12 +203,7 @@ impl VirtualDom {
 
         cur_component.run_scope().unwrap();
 
-        diff_machine.stack.push(DiffInstruction::DiffNode {
-            old: cur_component.frames.wip_head(),
-            new: cur_component.frames.fin_head(),
-        });
-
-        diff_machine.work().await;
+        diff_machine.diff_scope(self.base_scope).await;
 
         diff_machine.mutations
     }