RavenDB 3.0 Keynote

  • RavenDB 3.0 Keynote
    - Oren Eini, ayende@ayende.com, ayende.com/blog, Hibernating Rhinos
  • About this conference
    - Tweet with
    - Logistics
  • Free stuff
    - RavenDB coupon: ravenconf2014
    - RavenDB in Action, 45% discount: ravdb14cf
  • Codealike
    - Metrics beyond anything you’ve seen
    - Recent Microsoft case study at: http://goo.gl/GTgkv6
    - VIP subscription for attendees: codealike.com/VIP
    - Token: RavenConf2014
    - Get 3 months of Codealike Premium
  • History
    - Mid 2008: Rhino Divan DB started
    - Sep 2009: RavenDB is created
    - May 2010: RavenDB 1.0
    - Nov 2010: RavenDB’s first production deployment
    - Jan 2013: RavenDB 2.0
    - Jul 2013: RavenDB 2.5
    - Aug 2013: first RavenDB book is out
    - Apr 2014: first RavenDB Conference
    - Jul 2014 (est.): RavenDB 3.0 launch
  • RavenDB (Ohloh)
    - 40,100 commits
    - 210 contributors
    - 2,774,921 lines of code
    - Estimated 797 years of effort (COCOMO model)
    - First commit in September 2009
    - Most recent commit about 13 hours ago
  • Jan 1st, 2015
    - RavenDB becomes self-aware
  • RavenDB 3.0
    - ~15 team members
    - Some parts were started in 2011
    - 18 months of work (another 2–3 months remaining)
    - More than 600 issues
    - Awesome
  • What did we do?!
    - Voron
    - OWIN / Web API
    - Indexing
    - Operations
    - RavenFS
    - JVM client API
    - Spit & polish
    - New studio
  • behold http://www.wizards.com/dnd/images/leof_gallery/86716.jpg
  • It’s not about the UI
    - Yes, important
    - Yes, we have ~8 people on it now
    - We’re a database
    - http://issues.hibernatingrhinos.com
    - Features, not cosmetics
  • Seriously, now…
    - What should you be excited about?
    - It isn’t the feature, it is the direction…
  • Removing friction
    Indexing:
    - Index deletes are async
    - Index IDs
    - Small collection optimization
    - Fan-out prevention
    Operations:
    - No performance counters
    - Additional debug endpoints
    - Periodic backups bundle: full/incremental & deletes
    - Explicit failover servers (see the sketch after this slide)
    - Reduced number of assemblies
    - Server-to-server smuggling
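Explicit failover servers are worth a second look: instead of the client having to fetch the replication topology from the server before it can fail over, you hand it the fallback nodes up front. A minimal sketch with the 3.0 .NET client, assuming the FailoverServers / ReplicationDestination shapes as I recall them from that release; the URLs and database name are made up:

```csharp
using Raven.Abstractions.Replication;
using Raven.Client.Document;

// Hypothetical two-node setup: the client knows about the secondary
// even if it never had a chance to read the replication topology.
var store = new DocumentStore
{
    Url = "http://primary:8080",          // made-up primary node
    DefaultDatabase = "Northwind",        // made-up database name
    FailoverServers = new FailoverServers
    {
        ForDefaultDatabase = new[]
        {
            new ReplicationDestination
            {
                Url = "http://secondary:8080" // made-up failover node
            }
        }
    }
};
store.Initialize();
```

With that in place, a client that starts up while the primary is already down can still fall back to the listed destinations, which is exactly the friction this feature removes.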
  • Increasing access
  • Raven File System (RavenFS)
    - Tailored persistence solution
    - In production since 2012
    - Replicated file system
    - Optimized change tracking
    - Very large files
    - Replacing attachments (client sketch below)
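To give a feel for the client surface, here is a minimal upload sketch against the 3.0-era RavenFS .NET client. The FilesStore / OpenAsyncSession / RegisterUpload calls are recalled from that release, so treat this as a sketch; the server URL, file system name, and paths are made up:

```csharp
using System.IO;
using System.Threading.Tasks;
using Raven.Client.FileSystem;

public static class RavenFsSketch
{
    public static async Task UploadAsync()
    {
        using (var store = new FilesStore
        {
            Url = "http://localhost:8080",     // made-up server
            DefaultFileSystem = "NorthwindFS"  // made-up file system name
        }.Initialize())
        using (var session = store.OpenAsyncSession())
        using (var content = File.OpenRead("huge-export.dump"))
        {
            // Same unit-of-work model as the document session:
            // register the work, then one SaveChangesAsync call.
            session.RegisterUpload("exports/huge-export.dump", content);
            await session.SaveChangesAsync();
        }
    }
}
```

The session-based API is deliberately symmetric with the document client, which is what makes RavenFS a natural replacement for attachments rather than a separate product to learn.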
  • Spit & polish
    - Preserving missing properties
    - Lazy async (see the sketch after this slide)
    - Single pipeline (embedded / HTTP)
    - Multiple database support for embedded
    - Everything on top of OWIN / Web API
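Lazy async deserves a quick illustration: lazy operations come to the async session, batching any number of pending operations into a single round trip. A sketch assuming the async surface mirrors the sync Lazily / Eagerly API; the document classes and IDs are made up:

```csharp
using System;
using System.Threading.Tasks;
using Raven.Client;

// Made-up document classes for the example.
public class Order { public string Id { get; set; } }
public class Customer { public string Id { get; set; } }

public static class LazyAsyncSketch
{
    public static async Task RunAsync(IDocumentStore store)
    {
        using (var session = store.OpenAsyncSession())
        {
            // Nothing goes over the wire yet; both loads are just registered.
            Lazy<Task<Order>> order =
                session.Advanced.Lazily.LoadAsync<Order>("orders/1");
            Lazy<Task<Customer>> customer =
                session.Advanced.Lazily.LoadAsync<Customer>("customers/1");

            // All pending lazy operations are sent in one round trip.
            await session.Advanced.Eagerly.ExecuteAllPendingLazyOperationsAsync();

            Order o = await order.Value;
            Customer c = await customer.Value;
        }
    }
}
```

The win is the same as with the sync session: N lazy loads or queries cost one HTTP request instead of N.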
  • Where are we now?
    - Stabilization
    - http://issues.hibernatingrhinos.com
    - Force new feature mode: On
    - No new features going in until release
  • Hi, what about Voron?!
    - Internal only
    - Not important
  • Not so fast!
    - Voron is very important
    - Impl. details after lunch…
    - Implications of Voron:
      - We own the entire stack
      - Tailored solutions
  • Where are we going?
    Actual:
    - Voron distribution
    - Log shipping
    - Raft
    - Polyglot persistence solution
    - RavenFS is just the beginning
    - Event aggregations, the fallen feature
    Research:
    - Project Corax
    - Project Tempus
    - Project Duco
  • Hackathon
    - After hours
    - Let us make something cool!
    - Full feature, from the disk to the UI
  • Questions?
Description
The RavenDB 3.0 keynote: what is going on with RavenDB, and where are we going?