
wip: bubbling

Jonathan Kelley, 3 years ago
parent
commit
7978a17

+ 51 - 0
README.md

@@ -216,3 +216,54 @@ Dioxus is heavily inspired by React, but we want your transition to feel like an
 - 🛠 = actively being worked on
 - 👀 = not yet implemented or being worked on
 - ❓ = not sure if will or can implement
+
+
+## FAQ:
+
+### Aren't VDOMs just pure overhead? Why not something like Solid or Svelte?
+----
+Remember: Dioxus is a library - not a compiler like Svelte. Plus, the inner VirtualDOM allows Dioxus to easily port into different runtimes, support SSR, and run remotely in the cloud. VDOMs tend to be more ergonomic to work with and feel roughly like natural Rust code. The overhead of Dioxus is **extraordinarily** minimal... sure, there may be some overhead, but it is an order of magnitude lower than the time required to actually update the page.
+
+
+### Isn't the overhead for interacting with the DOM from WASM too much?
+----
+The overhead of the layer between WASM and the JS APIs is poorly understood. Rust web benchmarks typically suffer from differences in how Rust and JS cache strings. In Dioxus, we solve most of these issues, and our JS Framework Benchmark entry actually beats the WASM Bindgen benchmark in many cases. Compared to a "pure vanilla JS" solution, Dioxus adds less than 5% overhead and takes advantage of batched DOM manipulation.
+
+### Aren't WASM binaries too huge to deploy in production?
+----
+WASM binary sizes are another poorly understood characteristic of Rust web apps. 50kb of WASM and 50kb of JS are not created equal. In JS, the code must be downloaded _first_ and _then_ JIT-ted. Just-in-time compiling 50kb of JavaScript takes some time, which is why 50kb of JavaScript sounds like a lot! With WASM, however, the code is downloaded and JIT-ted _simultaneously_ through the magic of streaming compilation. By the time the 50kb of Rust has finished downloading, it is already ready to go. Again, Dioxus beats out many benchmarks on time-to-interactivity.
+
+For reference, the gzipped Dioxus `hello-world` clocks in at around 60kb.
+
+### Why hooks? Why not MVC, classes, traits, messages, etc?
+----
+There are plenty of Elm-like Rust frameworks in the world - we were not interested in making another! Instead, we borrowed hooks from React. JS and Rust share many structural similarities, so if you're comfortable with React, you'll be plenty comfortable with Dioxus.
+
+### Why a custom DSL? Why not just pure function calls?
+----
+The `RSX` DSL is _barely_ a DSL. Rustaceans will find it very similar to simply assembling nested structs, but without the syntactical overhead of "Default" everywhere or having to jump through hoops with the builder pattern (a rough sketch follows below). Between RSX, HTML, the Raw Factory API, and the NodeBuilder syntax, there are plenty of options to choose from.
+
+### What are the build times like? Why on earth would I choose Rust instead of JS/TS/Elm?
+----
+Dioxus builds roughly as fast as a complex WebPack-TypeScript site. Compile times will be slower than an equivalent TypeScript site, but not unbearably slow. The WASM compiler backend for Rust is very fast. Iterating on small components is basically instant, and larger apps take a few seconds. In practice, the compiler guarantees of Rust balance out the rebuild times.
+
+### What about Yew/Seed/Sycamore/Dominator/Dodrio/Percy?
+----
+- Yew and Seed use an Elm-like pattern and don't support SSR or any alternate rendering platforms
+- Sycamore and Dominator are more like SolidJS/Svelte, requiring no VDOM but having less naturally-Rusty state management
+- Percy isn't quite mature yet
+- Dodrio is the spiritual predecessor of Dioxus, but is currently an archived research project without the batteries of Dioxus
+
+### How do the mobile and desktop renderers work? Is it Electron?
+----
+Currently, Dioxus uses your device's native WebView library to draw the page. None of your app code actually runs in the WebView thread, so you can access system resources instead of having to go through something like NodeJS. This means your app will use Safari on macOS/iOS, Edge (Chromium) on Windows, and whatever the default web browser is on Linux and Android. Because your code is compiled and runs natively, performance is not a problem. You will have to use the various "Escape Hatches" to use browser-native APIs (like WebGL) and work around visual differences in how Safari and Chrome render the page.
+
+In the future, we are interested in using WebRender to provide a fully native renderer without having to go through the system WebView library. In practice, Dioxus mobile and desktop are great for CRUD-style apps, but the ergonomic cross-platform APIs (GPS, camera, etc.) are not there yet.
+
+### Why NOT Dioxus?
+----
+You shouldn't use Dioxus if:
+- You don't like the React Hooks approach to frontend
+- You need a no-std renderer
+- You need to target browsers without WASM or asm.js support
+
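To ground the DSL FAQ above: the sketch below shows roughly what assembling a tree with `rsx!` looks like compared to hand-building nested structs. The exact macro syntax and the `FC`/`Context` signatures have shifted between Dioxus versions, so treat this as an approximation rather than code guaranteed to compile against this commit.

```rust
// Approximate sketch of the RSX form described in the FAQ above.
// Attribute separators and the component signature may differ slightly
// from the version of Dioxus in this commit.
use dioxus::prelude::*;

static Example: FC<()> = |cx| {
    cx.render(rsx! {
        div {
            class: "counter",
            h1 { "Hello, Dioxus" }
            button {
                onclick: move |_| println!("clicked"),
                "Click me"
            }
        }
    })
};
```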

+ 1 - 1
packages/core/.vscode/settings.json

@@ -1,3 +1,3 @@
 {
-  "rust-analyzer.inlayHints.enable": false
+  "rust-analyzer.inlayHints.enable": true
 }

+ 27 - 20
packages/core/src/diff.rs

@@ -216,11 +216,6 @@ impl<'bump> DiffMachine<'bump> {
             MountType::Absorb => {
                 self.stack.add_child_count(nodes_created);
             }
-            MountType::Append => {
-                self.mutations.edits.push(AppendChildren {
-                    many: nodes_created as u32,
-                });
-            }
 
             MountType::Replace { old } => {
                 if let Some(old_id) = old.try_mounted_id() {
@@ -240,6 +235,12 @@ impl<'bump> DiffMachine<'bump> {
                 }
             }
 
+            MountType::Append => {
+                self.mutations.edits.push(AppendChildren {
+                    many: nodes_created as u32,
+                });
+            }
+
             MountType::InsertAfter { other_node } => {
                 let root = self.find_last_element(other_node).unwrap();
                 self.mutations.insert_after(root, nodes_created as u32);
@@ -258,24 +259,24 @@ impl<'bump> DiffMachine<'bump> {
 
     fn create_node(&mut self, node: &'bump VNode<'bump>) {
         match node {
-            VNode::Text(vtext) => self.create_text_node(vtext),
-            VNode::Suspended(suspended) => self.create_suspended_node(suspended),
-            VNode::Anchor(anchor) => self.create_anchor_node(anchor),
-            VNode::Element(element) => self.create_element_node(element),
+            VNode::Text(vtext) => self.create_text_node(vtext, node),
+            VNode::Suspended(suspended) => self.create_suspended_node(suspended, node),
+            VNode::Anchor(anchor) => self.create_anchor_node(anchor, node),
+            VNode::Element(element) => self.create_element_node(element, node),
             VNode::Fragment(frag) => self.create_fragment_node(frag),
             VNode::Component(component) => self.create_component_node(component),
         }
     }
 
-    fn create_text_node(&mut self, vtext: &'bump VText<'bump>) {
-        let real_id = self.vdom.reserve_node();
+    fn create_text_node(&mut self, vtext: &'bump VText<'bump>, node: &'bump VNode<'bump>) {
+        let real_id = self.vdom.reserve_node(node);
         self.mutations.create_text_node(vtext.text, real_id);
         vtext.dom_id.set(Some(real_id));
         self.stack.add_child_count(1);
     }
 
-    fn create_suspended_node(&mut self, suspended: &'bump VSuspended) {
-        let real_id = self.vdom.reserve_node();
+    fn create_suspended_node(&mut self, suspended: &'bump VSuspended, node: &'bump VNode<'bump>) {
+        let real_id = self.vdom.reserve_node(node);
         self.mutations.create_placeholder(real_id);
 
         suspended.dom_id.set(Some(real_id));
@@ -284,14 +285,14 @@ impl<'bump> DiffMachine<'bump> {
         self.attach_suspended_node_to_scope(suspended);
     }
 
-    fn create_anchor_node(&mut self, anchor: &'bump VAnchor) {
-        let real_id = self.vdom.reserve_node();
+    fn create_anchor_node(&mut self, anchor: &'bump VAnchor, node: &'bump VNode<'bump>) {
+        let real_id = self.vdom.reserve_node(node);
         self.mutations.create_placeholder(real_id);
         anchor.dom_id.set(Some(real_id));
         self.stack.add_child_count(1);
     }
 
-    fn create_element_node(&mut self, element: &'bump VElement<'bump>) {
+    fn create_element_node(&mut self, element: &'bump VElement<'bump>, node: &'bump VNode<'bump>) {
         let VElement {
             tag_name,
             listeners,
@@ -302,7 +303,8 @@ impl<'bump> DiffMachine<'bump> {
             ..
         } = element;
 
-        let real_id = self.vdom.reserve_node();
+        let real_id = self.vdom.reserve_node(node);
+
         dom_id.set(Some(real_id));
 
         self.mutations.create_element(tag_name, *namespace, real_id);
@@ -398,7 +400,7 @@ impl<'bump> DiffMachine<'bump> {
             (Fragment(old), Fragment(new)) => self.diff_fragment_nodes(old, new),
             (Anchor(old), Anchor(new)) => new.dom_id.set(old.dom_id.get()),
             (Suspended(old), Suspended(new)) => self.diff_suspended_nodes(old, new),
-            (Element(old), Element(new)) => self.diff_element_nodes(old, new),
+            (Element(old), Element(new)) => self.diff_element_nodes(old, new, new_node),
 
             // Anything else is just a basic replace and create
             (
@@ -422,7 +424,12 @@ impl<'bump> DiffMachine<'bump> {
         }
     }
 
-    fn diff_element_nodes(&mut self, old: &'bump VElement<'bump>, new: &'bump VElement<'bump>) {
+    fn diff_element_nodes(
+        &mut self,
+        old: &'bump VElement<'bump>,
+        new: &'bump VElement<'bump>,
+        new_node: &'bump VNode<'bump>,
+    ) {
         let root = old.dom_id.get();
 
         // If the element type is completely different, the element needs to be re-rendered completely
@@ -438,7 +445,7 @@ impl<'bump> DiffMachine<'bump> {
                     el: old.dom_id.get(),
                 },
             });
-            self.create_element_node(new);
+            self.create_element_node(new, new_node);
             return;
         }
 

+ 6 - 0
packages/core/src/diff_stack.rs

@@ -39,6 +39,7 @@ pub(crate) struct DiffStack<'bump> {
     instructions: Vec<DiffInstruction<'bump>>,
     nodes_created_stack: SmallVec<[usize; 10]>,
     pub scope_stack: SmallVec<[ScopeId; 5]>,
+    pub element_id_stack: SmallVec<[ElementId; 5]>,
 }
 
 impl<'bump> DiffStack<'bump> {
@@ -47,6 +48,7 @@ impl<'bump> DiffStack<'bump> {
             instructions: Vec::with_capacity(1000),
             nodes_created_stack: smallvec![],
             scope_stack: smallvec![],
+            element_id_stack: smallvec![],
         }
     }
 
@@ -80,6 +82,10 @@ impl<'bump> DiffStack<'bump> {
         self.nodes_created_stack.push(count);
     }
 
+    pub fn push_element_id(&mut self, id: ElementId) {
+        self.element_id_stack.push(id);
+    }
+
     pub fn create_node(&mut self, node: &'bump VNode<'bump>, and: MountType<'bump>) {
         self.nodes_created_stack.push(0);
         self.instructions.push(DiffInstruction::Mount { and });

+ 112 - 0
packages/core/src/events.rs

@@ -76,6 +76,118 @@ impl std::fmt::Debug for SyntheticEvent {
     }
 }
 
+/// Priority of Event Triggers.
+///
+/// Internally, Dioxus will abort work that's taking too long if new, more important, work arrives. Unlike React, Dioxus
+/// won't be afraid to pause work or flush changes to the RealDOM. This is called "cooperative scheduling". Some Renderers
+/// implement this form of scheduling internally, however Dioxus will perform its own scheduling as well.
+///
+/// The ultimate goal of the scheduler is to manage latency of changes, prioritizing "flashier" changes over "subtler" changes.
+///
+/// React has a 5-tier priority system. However, they break things into "Continuous" and "Discrete" priority. For now,
+/// we keep it simple, and just use a 4-tier priority system.
+///
+/// - NoPriority = 0
+/// - LowPriority = 1
+/// - NormalPriority = 2
+/// - UserBlocking = 3
+/// - HighPriority = 4
+/// - ImmediatePriority = 5
+///
+/// We still have a concept of discrete vs continuous though - discrete events won't be batched, but continuous events will.
+/// This means that multiple "scroll" events will be processed in a single frame, but multiple "click" events will be
+/// flushed before proceeding. Multiple discrete events are highly unlikely, though.
+#[derive(Debug, PartialEq, Eq, Clone, Copy, Hash, PartialOrd, Ord)]
+pub enum EventPriority {
+    /// Work that must be completed during the EventHandler phase.
+    ///
+    /// Currently this is reserved for controlled inputs.
+    Immediate = 3,
+
+    /// "High Priority" work will not interrupt other high priority work, but will interrupt medium and low priority work.
+    ///
+    /// This is typically reserved for things like user interaction.
+    ///
+    /// React calls these "discrete" events, but with an extra category of "user-blocking" (Immediate).
+    High = 2,
+
+    /// "Medium priority" work is generated by page events not triggered by the user. These types of events are less important
+    /// than "High Priority" events and will take presedence over low priority events.
+    ///
+    /// This is typically reserved for VirtualEvents that are not related to keyboard or mouse input.
+    ///
+    /// React calls these "continuous" events (e.g. mouse move, mouse wheel, touch move, etc).
+    Medium = 1,
+
+    /// "Low Priority" work will always be pre-empted unless the work is significantly delayed, in which case it will be
+    /// advanced to the front of the work queue until completed.
+    ///
+    /// The primary user of Low Priority work is the asynchronous work system (suspense).
+    ///
+    /// This is considered "idle" work or "background" work.
+    Low = 0,
+}
+
+pub(crate) fn event_meta(event: &UserEvent) -> (bool, EventPriority) {
+    use EventPriority::*;
+
+    match event.name {
+        // clipboard
+        "copy" | "cut" | "paste" => (true, Medium),
+
+        // Composition
+        "compositionend" | "compositionstart" | "compositionupdate" => (true, Low),
+
+        // Keyboard
+        "keydown" | "keypress" | "keyup" => (true, High),
+
+        // Focus
+        "focus" | "blur" => (true, Low),
+
+        // Form
+        "change" | "input" | "invalid" | "reset" | "submit" => (true, Medium),
+
+        // Mouse
+        "click" | "contextmenu" | "doubleclick" | "drag" | "dragend" | "dragenter" | "dragexit"
+        | "dragleave" | "dragover" | "dragstart" | "drop" | "mousedown" | "mouseenter"
+        | "mouseleave" | "mouseout" | "mouseover" | "mouseup" => (true, High),
+
+        "mousemove" => (false, Medium),
+
+        // Pointer
+        "pointerdown" | "pointermove" | "pointerup" | "pointercancel" | "gotpointercapture"
+        | "lostpointercapture" | "pointerenter" | "pointerleave" | "pointerover" | "pointerout" => {
+            (true, Medium)
+        }
+
+        // Selection
+        "select" | "touchcancel" | "touchend" => (true, Medium),
+
+        // Touch
+        "touchmove" | "touchstart" => (true, Medium),
+
+        // Wheel
+        "scroll" | "wheel" => (false, Medium),
+
+        // Media
+        "abort" | "canplay" | "canplaythrough" | "durationchange" | "emptied" | "encrypted"
+        | "ended" | "error" | "loadeddata" | "loadedmetadata" | "loadstart" | "pause" | "play"
+        | "playing" | "progress" | "ratechange" | "seeked" | "seeking" | "stalled" | "suspend"
+        | "timeupdate" | "volumechange" | "waiting" => (true, Medium),
+
+        // Animation
+        "animationstart" | "animationend" | "animationiteration" => (true, Medium),
+
+        // Transition
+        "transitionend" => (true, Medium),
+
+        // Toggle
+        "toggle" => (true, Medium),
+
+        _ => (true, Low),
+    }
+}
+
 pub mod on {
     //! This module defines the synthetic events that all Dioxus apps enable. No matter the platform, every dioxus renderer
     //! will implement the same events and same behavior (bubbling, cancelation, etc).
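The `event_meta` table above tags each event name as discrete or continuous and assigns it an `EventPriority`; since the enum derives `Ord`, priorities compare by their numeric discriminants (`Immediate > High > Medium > Low`). The standalone sketch below uses simplified stand-in types (not the real scheduler or its `UserEvent`) to illustrate the batching rule from the doc comment: continuous events are coalesced, while a discrete event flushes the batch. The "batch priority" calculation is only there to show the derived ordering in use, not something the scheduler is shown doing here.

```rust
// Standalone sketch with simplified stand-in types -- not the real Dioxus
// scheduler or its UserEvent type.
use std::collections::VecDeque;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    Low = 0,
    Medium = 1,
    High = 2,
    Immediate = 3,
}

struct IncomingEvent {
    name: &'static str,
    discrete: bool,
    priority: Priority,
}

/// Drain one batch of events: continuous events (scroll, mousemove) are
/// coalesced together, but the first discrete event (click, keydown) ends
/// the batch so it can be flushed before any further work.
fn take_batch(queue: &mut VecDeque<IncomingEvent>) -> (Vec<IncomingEvent>, Option<Priority>) {
    let mut batch = Vec::new();
    while let Some(ev) = queue.pop_front() {
        let discrete = ev.discrete;
        batch.push(ev);
        if discrete {
            break;
        }
    }
    // Process the batch at the priority of its most urgent event.
    let priority = batch.iter().map(|ev| ev.priority).max();
    (batch, priority)
}

fn main() {
    let mut queue = VecDeque::from(vec![
        IncomingEvent { name: "scroll", discrete: false, priority: Priority::Medium },
        IncomingEvent { name: "scroll", discrete: false, priority: Priority::Medium },
        IncomingEvent { name: "click", discrete: true, priority: Priority::High },
    ]);

    let (batch, priority) = take_batch(&mut queue);
    assert_eq!(batch.len(), 3); // two scrolls coalesced, the click flushes the batch
    assert_eq!(priority, Some(Priority::High)); // derived Ord: High outranks Medium
    assert_eq!(batch[0].name, "scroll");
}
```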

+ 5 - 1
packages/core/src/lib.rs

@@ -24,8 +24,10 @@ pub mod hooklist;
 pub mod hooks;
 pub mod mutations;
 pub mod nodes;
+pub mod resources;
 pub mod scheduler;
 pub mod scope;
+pub mod tasks;
 pub mod test_dom;
 pub mod util;
 pub mod virtual_dom;
@@ -43,8 +45,10 @@ pub(crate) mod innerlude {
     pub use crate::hooks::*;
     pub use crate::mutations::*;
     pub use crate::nodes::*;
+    pub(crate) use crate::resources::*;
     pub use crate::scheduler::*;
     pub use crate::scope::*;
+    pub use crate::tasks::*;
     pub use crate::test_dom::*;
     pub use crate::util::*;
     pub use crate::virtual_dom::*;
@@ -58,7 +62,7 @@ pub(crate) mod innerlude {
 pub use crate::innerlude::{
     format_args_f, html, rsx, Context, DioxusElement, DomEdit, DomTree, ElementId, EventPriority,
     LazyNodes, MountType, Mutations, NodeFactory, Properties, ScopeId, SuspendedContext,
-    SyntheticEvent, TestDom, UserEvent, VNode, VirtualDom, FC,
+    SyntheticEvent, TaskHandle, TestDom, UserEvent, VNode, VirtualDom, FC,
 };
 
 pub mod prelude {

+ 3 - 0
packages/core/src/nodes.rs

@@ -188,6 +188,8 @@ pub struct VElement<'a> {
 
     pub dom_id: Cell<Option<ElementId>>,
 
+    pub parent_id: Cell<Option<ElementId>>,
+
     pub listeners: &'a [Listener<'a>],
 
     pub attributes: &'a [Attribute<'a>],
@@ -415,6 +417,7 @@ impl<'a> NodeFactory<'a> {
             attributes,
             children,
             dom_id: empty_cell(),
+            parent_id: empty_cell(),
         }))
     }
 

+ 89 - 0
packages/core/src/resources.rs

@@ -0,0 +1,89 @@
+use crate::innerlude::*;
+use slab::Slab;
+
+use std::{cell::UnsafeCell, rc::Rc};
+#[derive(Clone)]
+pub(crate) struct ResourcePool {
+    /*
+    This *has* to be an UnsafeCell.
+
+    Each BumpFrame and Scope is located in this Slab - and we'll need mutable access to a scope while holding on to
+    its bumpframe contents immutably.
+
+    However, all of the interaction with this Slab is done in this module and the Diff module, so it should be fairly
+    simple to audit.
+
+    Wrapped in Rc so the "get_shared_context" closure can walk the tree (immutably!)
+    */
+    pub components: Rc<UnsafeCell<Slab<Scope>>>,
+
+    /*
+    We use this slab for properly ordering ElementIDs - all we care about is the allocation strategy
+    that slab uses. The slab essentially just provides keys for ElementIDs that we can re-use in a Vec on the client.
+
+    This just happened to be the simplest and most efficient way to implement a deterministic keyed map with slot reuse.
+
+    Each entry now also stores a raw pointer to its VNode (instead of the old nil placeholder), opening the door to O(1) lookup of VNodes by ElementId...
+    */
+    pub raw_elements: Rc<UnsafeCell<Slab<*const VNode<'static>>>>,
+
+    pub channel: EventChannel,
+}
+
+impl ResourcePool {
+    /// this is unsafe because the caller needs to track which other scopes it's already using
+    pub fn get_scope(&self, idx: ScopeId) -> Option<&Scope> {
+        let inner = unsafe { &*self.components.get() };
+        inner.get(idx.0)
+    }
+
+    /// this is unsafe because the caller needs to track which other scopes it's already using
+    pub fn get_scope_mut(&self, idx: ScopeId) -> Option<&mut Scope> {
+        let inner = unsafe { &mut *self.components.get() };
+        inner.get_mut(idx.0)
+    }
+
+    // return a bumpframe with a lifetime attached to the arena borrow
+    // this is useful for merging lifetimes
+    pub fn with_scope_vnode<'b>(
+        &self,
+        _id: ScopeId,
+        _f: impl FnOnce(&mut Scope) -> &VNode<'b>,
+    ) -> Option<&VNode<'b>> {
+        todo!()
+    }
+
+    pub fn try_remove(&self, id: ScopeId) -> Option<Scope> {
+        let inner = unsafe { &mut *self.components.get() };
+        Some(inner.remove(id.0))
+        // .try_remove(id.0)
+        // .ok_or_else(|| Error::FatalInternal("Scope not found"))
+    }
+
+    pub fn reserve_node<'a>(&self, node: &'a VNode<'a>) -> ElementId {
+        let els = unsafe { &mut *self.raw_elements.get() };
+        let entry = els.vacant_entry();
+        let key = entry.key();
+        let id = ElementId(key);
+        let node = node as *const _;
+        let node = unsafe { std::mem::transmute(node) };
+        entry.insert(node);
+        id
+    }
+
+    /// return the id, freeing the space of the original node
+    pub fn collect_garbage(&self, id: ElementId) {
+        todo!("garabge collection currently WIP")
+        // self.raw_elements.remove(id.0);
+    }
+
+    pub fn insert_scope_with_key(&self, f: impl FnOnce(ScopeId) -> Scope) -> ScopeId {
+        let g = unsafe { &mut *self.components.get() };
+        let entry = g.vacant_entry();
+        let id = ScopeId(entry.key());
+        entry.insert(f(id));
+        id
+    }
+
+    pub fn borrow_bumpframe(&self) {}
+}
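`ResourcePool::reserve_node` above hands out `ElementId`s from a `slab::Slab`, and the comment notes that the allocation strategy is the part that matters: keys are dense and freed keys are reused, which is why `ElementId` (see `util.rs` below) is documented as unique across the VirtualDOM but not across time. A minimal standalone sketch of that behavior with the `slab` crate:

```rust
use slab::Slab;

fn main() {
    let mut elements: Slab<&str> = Slab::new();

    // Keys are handed out densely, starting at 0.
    let a = elements.insert("div");  // key 0
    let b = elements.insert("span"); // key 1
    assert_eq!((a, b), (0, 1));

    // Removing an entry frees its slot...
    elements.remove(a);

    // ...and the next insertion reuses the freed key, so ids stay small and
    // can index a plain Vec on the renderer side.
    let c = elements.insert("p");
    assert_eq!(c, a); // same key, different node -- "not unique across time"

    // reserve_node-style usage: grab a key before filling in the value.
    let entry = elements.vacant_entry();
    let key = entry.key();
    entry.insert("placeholder");
    assert_eq!(elements[key], "placeholder");
}
```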

+ 21 - 341
packages/core/src/scheduler.rs

@@ -71,18 +71,14 @@ For the rest, we defer to the rIC period and work down each queue from high to l
 use crate::heuristics::*;
 use crate::innerlude::*;
 use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
-use futures_util::stream::FuturesUnordered;
-use futures_util::{future::FusedFuture, pin_mut, Future, FutureExt, StreamExt};
-use fxhash::{FxHashMap, FxHashSet};
+use futures_util::{pin_mut, stream::FuturesUnordered, Future, FutureExt, StreamExt};
+use fxhash::FxHashSet;
 use indexmap::IndexSet;
 use slab::Slab;
-use smallvec::SmallVec;
 use std::{
     any::{Any, TypeId},
-    cell::{Cell, RefCell, RefMut, UnsafeCell},
-    collections::{BTreeMap, BTreeSet, BinaryHeap, HashMap, HashSet, VecDeque},
-    fmt::Display,
-    pin::Pin,
+    cell::{Cell, UnsafeCell},
+    collections::{HashSet, VecDeque},
     rc::Rc,
 };
 
@@ -142,9 +138,8 @@ pub(crate) struct Scheduler {
     // In-flight futures
     pub async_tasks: FuturesUnordered<FiberTask>,
 
-    // scheduler stuff
-    pub current_priority: EventPriority,
-
+    // // scheduler stuff
+    // pub current_priority: EventPriority,
     pub ui_events: VecDeque<UserEvent>,
 
     pub pending_immediates: VecDeque<ScopeId>,
@@ -155,7 +150,7 @@ pub(crate) struct Scheduler {
 
     pub garbage_scopes: HashSet<ScopeId>,
 
-    pub lanes: [PriorityLane; 4],
+    pub lane: PriorityLane,
 }
 
 impl Scheduler {
@@ -239,15 +234,8 @@ impl Scheduler {
 
             garbage_scopes: HashSet::new(),
 
-            current_priority: EventPriority::Low,
-
-            // sorted high to low by priority (0 = immediate, 3 = low)
-            lanes: [
-                PriorityLane::new(),
-                PriorityLane::new(),
-                PriorityLane::new(),
-                PriorityLane::new(),
-            ],
+            // current_priority: EventPriority::Low,
+            lane: PriorityLane::new(),
         }
     }
 
@@ -296,25 +284,6 @@ impl Scheduler {
     }
 
     fn prepare_work(&mut self) {
-        // consume all events that are "continuous" to be batched
-        // if we run into a discrete event, then bail early
-
-        self.current_priority = match (
-            self.lanes[0].has_work(),
-            self.lanes[1].has_work(),
-            self.lanes[2].has_work(),
-            self.lanes[3].has_work(),
-        ) {
-            (true, _, _, _) => EventPriority::Immediate,
-            (false, true, _, _) => EventPriority::High,
-            (false, false, true, _) => EventPriority::Medium,
-            (false, false, false, true) => EventPriority::Low,
-            (false, false, false, false) => {
-                // no work to do, process events
-                EventPriority::Low
-            }
-        };
-
         while let Some(trigger) = self.ui_events.pop_back() {
             if let Some(scope) = self.pool.get_scope_mut(trigger.scope) {}
         }
@@ -322,8 +291,7 @@ impl Scheduler {
 
     // nothing to do, no events on channels, no work
     pub fn has_any_work(&self) -> bool {
-        let pending_lanes = self.lanes.iter().find(|f| f.has_work()).is_some();
-        pending_lanes || self.has_pending_events()
+        self.lane.has_work() || self.has_pending_events()
     }
 
     pub fn has_pending_events(&self) -> bool {
@@ -333,36 +301,19 @@ impl Scheduler {
     /// re-balance the work lanes, ensuring high-priority work properly bumps away low priority work
     fn balance_lanes(&mut self) {}
 
-    fn load_current_lane(&mut self) -> &mut PriorityLane {
-        match self.current_priority {
-            EventPriority::Immediate => &mut self.lanes[0],
-            EventPriority::High => &mut self.lanes[1],
-            EventPriority::Medium => &mut self.lanes[2],
-            EventPriority::Low => &mut self.lanes[3],
-        }
-    }
-
     fn save_work(&mut self, lane: SavedDiffWork) {
         let saved: SavedDiffWork<'static> = unsafe { std::mem::transmute(lane) };
-        self.load_current_lane().saved_state = Some(saved);
+        self.lane.saved_state = Some(saved);
     }
 
     fn load_work(&mut self) -> SavedDiffWork<'static> {
-        match self.current_priority {
-            EventPriority::Immediate => todo!(),
-            EventPriority::High => todo!(),
-            EventPriority::Medium => todo!(),
-            EventPriority::Low => todo!(),
-        }
-    }
-
-    pub fn current_lane(&mut self) -> &mut PriorityLane {
-        match self.current_priority {
-            EventPriority::Immediate => &mut self.lanes[0],
-            EventPriority::High => &mut self.lanes[1],
-            EventPriority::Medium => &mut self.lanes[2],
-            EventPriority::Low => &mut self.lanes[3],
-        }
+        // match self.current_priority {
+        //     EventPriority::Immediate => todo!(),
+        //     EventPriority::High => todo!(),
+        //     EventPriority::Medium => todo!(),
+        //     EventPriority::Low => todo!(),
+        // }
+        unsafe { self.lane.saved_state.take().unwrap().extend() }
     }
 
     pub fn handle_channel_msg(&mut self, msg: SchedulerMsg) {
@@ -384,17 +335,6 @@ impl Scheduler {
         }
     }
 
-    fn add_dirty_scope(&mut self, scope: ScopeId, priority: EventPriority) {
-        todo!()
-        // match priority {
-        //     EventPriority::High => self.high_priorty.dirty_scopes.insert(scope),
-        //     EventPriority::Medium => self.medium_priority.dirty_scopes.insert(scope),
-        //     EventPriority::Low => self.low_priority.dirty_scopes.insert(scope),
-        // };
-    }
-
-    async fn wait_for_any_work(&mut self) {}
-
     /// Load the current lane, and work on it, periodically checking in if the deadline has been reached.
     ///
     /// Returns true if the lane is finished before the deadline could be met.
@@ -414,13 +354,13 @@ impl Scheduler {
         if machine.stack.is_empty() {
             let shared = self.pool.clone();
 
-            self.current_lane().dirty_scopes.sort_by(|a, b| {
+            self.lane.dirty_scopes.sort_by(|a, b| {
                 let h1 = shared.get_scope(*a).unwrap().height;
                 let h2 = shared.get_scope(*b).unwrap().height;
                 h1.cmp(&h2)
             });
 
-            if let Some(scope) = self.current_lane().dirty_scopes.pop() {
+            if let Some(scope) = self.lane.dirty_scopes.pop() {
                 let component = self.pool.get_scope(scope).unwrap();
                 let (old, new) = (component.frames.wip_head(), component.frames.fin_head());
                 machine.stack.push(DiffInstruction::Diff { new, old });
@@ -437,7 +377,7 @@ impl Scheduler {
             false
         } else {
             for node in saved.seen_scopes.drain() {
-                self.current_lane().dirty_scopes.remove(&node);
+                self.lane.dirty_scopes.remove(&node);
             }
 
             let mut new_mutations = Mutations::new();
@@ -594,8 +534,6 @@ impl Scheduler {
     }
 }
 
-impl Scheduler {}
-
 pub(crate) struct PriorityLane {
     pub dirty_scopes: IndexSet<ScopeId>,
     pub saved_state: Option<SavedDiffWork<'static>>,
@@ -614,262 +552,4 @@ impl PriorityLane {
     fn has_work(&self) -> bool {
         self.dirty_scopes.len() > 0 || self.in_progress == true
     }
-
-    fn work(&mut self) {
-        let scope = self.dirty_scopes.pop();
-    }
-}
-
-pub struct TaskHandle {
-    pub(crate) sender: UnboundedSender<SchedulerMsg>,
-    pub(crate) our_id: u64,
-}
-
-impl TaskHandle {
-    /// Toggles this coroutine off/on.
-    ///
-    /// This method is not synchronous - your task will not stop immediately.
-    pub fn toggle(&self) {
-        self.sender
-            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
-            .unwrap()
-    }
-
-    /// This method is not synchronous - your task will not stop immediately.
-    pub fn resume(&self) {
-        self.sender
-            .unbounded_send(SchedulerMsg::ResumeTask(self.our_id))
-            .unwrap()
-    }
-
-    /// This method is not synchronous - your task will not stop immediately.
-    pub fn stop(&self) {
-        self.sender
-            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
-            .unwrap()
-    }
-
-    /// This method is not synchronous - your task will not stop immediately.
-    pub fn restart(&self) {
-        self.sender
-            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
-            .unwrap()
-    }
-}
-
-/// A component's unique identifier.
-///
-/// `ScopeId` is a `usize` that is unique across the entire VirtualDOM - but not unique across time. If a component is
-/// unmounted, then the `ScopeId` will be reused for a new component.
-#[derive(serde::Serialize, serde::Deserialize, Copy, Clone, PartialEq, Eq, Hash, Debug)]
-pub struct ScopeId(pub usize);
-
-/// An Element's unique identifier.
-///
-/// `ElementId` is a `usize` that is unique across the entire VirtualDOM - but not unique across time. If a component is
-/// unmounted, then the `ElementId` will be reused for a new component.
-#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
-pub struct ElementId(pub usize);
-impl Display for ElementId {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}", self.0)
-    }
-}
-
-impl ElementId {
-    pub fn as_u64(self) -> u64 {
-        self.0 as u64
-    }
-}
-
-/// Priority of Event Triggers.
-///
-/// Internally, Dioxus will abort work that's taking too long if new, more important, work arrives. Unlike React, Dioxus
-/// won't be afraid to pause work or flush changes to the RealDOM. This is called "cooperative scheduling". Some Renderers
-/// implement this form of scheduling internally, however Dioxus will perform its own scheduling as well.
-///
-/// The ultimate goal of the scheduler is to manage latency of changes, prioritizing "flashier" changes over "subtler" changes.
-///
-/// React has a 5-tier priority system. However, they break things into "Continuous" and "Discrete" priority. For now,
-/// we keep it simple, and just use a 3-tier priority system.
-///
-/// - NoPriority = 0
-/// - LowPriority = 1
-/// - NormalPriority = 2
-/// - UserBlocking = 3
-/// - HighPriority = 4
-/// - ImmediatePriority = 5
-///
-/// We still have a concept of discrete vs continuous though - discrete events won't be batched, but continuous events will.
-/// This means that multiple "scroll" events will be processed in a single frame, but multiple "click" events will be
-/// flushed before proceeding. Multiple discrete events is highly unlikely, though.
-#[derive(Debug, PartialEq, Eq, Clone, Copy, Hash, PartialOrd, Ord)]
-pub enum EventPriority {
-    /// Work that must be completed during the EventHandler phase.
-    ///
-    /// Currently this is reserved for controlled inputs.
-    Immediate = 3,
-
-    /// "High Priority" work will not interrupt other high priority work, but will interrupt medium and low priority work.
-    ///
-    /// This is typically reserved for things like user interaction.
-    ///
-    /// React calls these "discrete" events, but with an extra category of "user-blocking" (Immediate).
-    High = 2,
-
-    /// "Medium priority" work is generated by page events not triggered by the user. These types of events are less important
-    /// than "High Priority" events and will take presedence over low priority events.
-    ///
-    /// This is typically reserved for VirtualEvents that are not related to keyboard or mouse input.
-    ///
-    /// React calls these "continuous" events (e.g. mouse move, mouse wheel, touch move, etc).
-    Medium = 1,
-
-    /// "Low Priority" work will always be pre-empted unless the work is significantly delayed, in which case it will be
-    /// advanced to the front of the work queue until completed.
-    ///
-    /// The primary user of Low Priority work is the asynchronous work system (suspense).
-    ///
-    /// This is considered "idle" work or "background" work.
-    Low = 0,
-}
-
-#[derive(Clone)]
-pub(crate) struct ResourcePool {
-    /*
-    This *has* to be an UnsafeCell.
-
-    Each BumpFrame and Scope is located in this Slab - and we'll need mutable access to a scope while holding on to
-    its bumpframe conents immutably.
-
-    However, all of the interaction with this Slab is done in this module and the Diff module, so it should be fairly
-    simple to audit.
-
-    Wrapped in Rc so the "get_shared_context" closure can walk the tree (immutably!)
-    */
-    pub components: Rc<UnsafeCell<Slab<Scope>>>,
-
-    /*
-    Yes, a slab of "nil". We use this for properly ordering ElementIDs - all we care about is the allocation strategy
-    that slab uses. The slab essentially just provides keys for ElementIDs that we can re-use in a Vec on the client.
-
-    This just happened to be the simplest and most efficient way to implement a deterministic keyed map with slot reuse.
-
-    In the future, we could actually store a pointer to the VNode instead of nil to provide O(1) lookup for VNodes...
-    */
-    pub raw_elements: Rc<UnsafeCell<Slab<()>>>,
-
-    pub channel: EventChannel,
-}
-
-impl ResourcePool {
-    /// this is unsafe because the caller needs to track which other scopes it's already using
-    pub fn get_scope(&self, idx: ScopeId) -> Option<&Scope> {
-        let inner = unsafe { &*self.components.get() };
-        inner.get(idx.0)
-    }
-
-    /// this is unsafe because the caller needs to track which other scopes it's already using
-    pub fn get_scope_mut(&self, idx: ScopeId) -> Option<&mut Scope> {
-        let inner = unsafe { &mut *self.components.get() };
-        inner.get_mut(idx.0)
-    }
-
-    // return a bumpframe with a lifetime attached to the arena borrow
-    // this is useful for merging lifetimes
-    pub fn with_scope_vnode<'b>(
-        &self,
-        _id: ScopeId,
-        _f: impl FnOnce(&mut Scope) -> &VNode<'b>,
-    ) -> Option<&VNode<'b>> {
-        todo!()
-    }
-
-    pub fn try_remove(&self, id: ScopeId) -> Option<Scope> {
-        let inner = unsafe { &mut *self.components.get() };
-        Some(inner.remove(id.0))
-        // .try_remove(id.0)
-        // .ok_or_else(|| Error::FatalInternal("Scope not found"))
-    }
-
-    pub fn reserve_node(&self) -> ElementId {
-        let els = unsafe { &mut *self.raw_elements.get() };
-        ElementId(els.insert(()))
-    }
-
-    /// return the id, freeing the space of the original node
-    pub fn collect_garbage(&self, id: ElementId) {
-        todo!("garabge collection currently WIP")
-        // self.raw_elements.remove(id.0);
-    }
-
-    pub fn insert_scope_with_key(&self, f: impl FnOnce(ScopeId) -> Scope) -> ScopeId {
-        let g = unsafe { &mut *self.components.get() };
-        let entry = g.vacant_entry();
-        let id = ScopeId(entry.key());
-        entry.insert(f(id));
-        id
-    }
-
-    pub fn borrow_bumpframe(&self) {}
-}
-
-fn event_meta(event: &UserEvent) -> (bool, EventPriority) {
-    use EventPriority::*;
-
-    match event.name {
-        // clipboard
-        "copy" | "cut" | "paste" => (true, Medium),
-
-        // Composition
-        "compositionend" | "compositionstart" | "compositionupdate" => (true, Low),
-
-        // Keyboard
-        "keydown" | "keypress" | "keyup" => (true, High),
-
-        // Focus
-        "focus" | "blur" => (true, Low),
-
-        // Form
-        "change" | "input" | "invalid" | "reset" | "submit" => (true, Medium),
-
-        // Mouse
-        "click" | "contextmenu" | "doubleclick" | "drag" | "dragend" | "dragenter" | "dragexit"
-        | "dragleave" | "dragover" | "dragstart" | "drop" | "mousedown" | "mouseenter"
-        | "mouseleave" | "mouseout" | "mouseover" | "mouseup" => (true, High),
-
-        "mousemove" => (false, Medium),
-
-        // Pointer
-        "pointerdown" | "pointermove" | "pointerup" | "pointercancel" | "gotpointercapture"
-        | "lostpointercapture" | "pointerenter" | "pointerleave" | "pointerover" | "pointerout" => {
-            (true, Medium)
-        }
-
-        // Selection
-        "select" | "touchcancel" | "touchend" => (true, Medium),
-
-        // Touch
-        "touchmove" | "touchstart" => (true, Medium),
-
-        // Wheel
-        "scroll" | "wheel" => (false, Medium),
-
-        // Media
-        "abort" | "canplay" | "canplaythrough" | "durationchange" | "emptied" | "encrypted"
-        | "ended" | "error" | "loadeddata" | "loadedmetadata" | "loadstart" | "pause" | "play"
-        | "playing" | "progress" | "ratechange" | "seeked" | "seeking" | "stalled" | "suspend"
-        | "timeupdate" | "volumechange" | "waiting" => (true, Medium),
-
-        // Animation
-        "animationstart" | "animationend" | "animationiteration" => (true, Medium),
-
-        // Transition
-        "transitionend" => (true, Medium),
-
-        // Toggle
-        "toggle" => (true, Medium),
-
-        _ => (true, Low),
-    }
 }

+ 39 - 0
packages/core/src/tasks.rs

@@ -0,0 +1,39 @@
+use crate::innerlude::*;
+use futures_channel::mpsc::UnboundedSender;
+
+pub struct TaskHandle {
+    pub(crate) sender: UnboundedSender<SchedulerMsg>,
+    pub(crate) our_id: u64,
+}
+
+impl TaskHandle {
+    /// Toggles this coroutine off/on.
+    ///
+    /// This method is not synchronous - your task will not stop immediately.
+    pub fn toggle(&self) {
+        self.sender
+            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
+            .unwrap()
+    }
+
+    /// This method is not synchronous - your task will not stop immediately.
+    pub fn resume(&self) {
+        self.sender
+            .unbounded_send(SchedulerMsg::ResumeTask(self.our_id))
+            .unwrap()
+    }
+
+    /// This method is not synchronous - your task will not stop immediately.
+    pub fn stop(&self) {
+        self.sender
+            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
+            .unwrap()
+    }
+
+    /// This method is not synchronous - your task will not stop immediately.
+    pub fn restart(&self) {
+        self.sender
+            .unbounded_send(SchedulerMsg::ToggleTask(self.our_id))
+            .unwrap()
+    }
+}
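The `TaskHandle` moved into `tasks.rs` above only enqueues a `SchedulerMsg` on an unbounded channel, which is why every doc comment stresses that the calls are not synchronous: nothing happens until the scheduler drains its queue. Below is a standalone sketch of that fire-and-forget pattern with `futures_channel`, using stand-in `Msg`/`Handle` types rather than the real `SchedulerMsg`:

```rust
// Standalone sketch: the handle only enqueues a message; the scheduler acts
// on it later when it drains the channel. Stand-in types, not the real
// SchedulerMsg or TaskHandle.
use futures_channel::mpsc::{unbounded, UnboundedSender};

#[derive(Debug, PartialEq)]
enum Msg {
    ToggleTask(u64),
}

struct Handle {
    sender: UnboundedSender<Msg>,
    id: u64,
}

impl Handle {
    /// Not synchronous: the task is only toggled once the scheduler
    /// processes the queued message.
    fn toggle(&self) {
        self.sender.unbounded_send(Msg::ToggleTask(self.id)).unwrap();
    }
}

fn main() {
    let (tx, mut rx) = unbounded::<Msg>();
    let handle = Handle { sender: tx, id: 7 };

    handle.toggle();

    // Nothing has happened to the task yet -- the message just sits in the
    // queue until the scheduler's event loop drains it.
    assert_eq!(rx.try_next().unwrap(), Some(Msg::ToggleTask(7)));
}
```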

+ 26 - 0
packages/core/src/util.rs

@@ -1,4 +1,5 @@
 use std::cell::Cell;
+use std::fmt::Display;
 
 use crate::innerlude::*;
 
@@ -64,3 +65,28 @@ impl Future for YieldNow {
         }
     }
 }
+
+/// A component's unique identifier.
+///
+/// `ScopeId` is a `usize` that is unique across the entire VirtualDOM - but not unique across time. If a component is
+/// unmounted, then the `ScopeId` will be reused for a new component.
+#[derive(serde::Serialize, serde::Deserialize, Copy, Clone, PartialEq, Eq, Hash, Debug)]
+pub struct ScopeId(pub usize);
+
+/// An Element's unique identifier.
+///
+/// `ElementId` is a `usize` that is unique across the entire VirtualDOM - but not unique across time. If a component is
+/// unmounted, then the `ElementId` will be reused for a new component.
+#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
+pub struct ElementId(pub usize);
+impl Display for ElementId {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "{}", self.0)
+    }
+}
+
+impl ElementId {
+    pub fn as_u64(self) -> u64 {
+        self.0 as u64
+    }
+}

+ 5 - 0
packages/core/src/virtual_dom.rs

@@ -380,3 +380,8 @@ impl std::fmt::Display for VirtualDom {
         renderer.render(self, root, f, 0)
     }
 }
+
+pub struct VirtualDomConfig {
+    component_slab_size: usize,
+    element_slab_size: usize,
+}