You can attend Day 1, Day 2, or both, plus selected workshops. Credit card payment is available.
It’s no longer about convincing management that using data can create business value. The real question has shifted to ensuring that this value is delivered in a sustainable way. Too many organizations still fail to actually get value from their data initiatives. What are the key elements that need to be put in place to ensure success? How do you move from a technology-centric to an integrated data strategy? How do we improve the data literacy of the stakeholders and ensure that data products can be used effectively? With clockwork regularity we introduce new concepts such as data fabric and data mesh, and the question remains to what extent they solve existing problems or introduce new ones.
You will learn:
With the arrival of the new pension system in The Netherlands, APG is facing a serious challenge: how to convert the pension rights of millions of participants to the new pension scheme? Data management plays an important role in this transition. Arjen Bouman offers a look behind the scenes of this huge operation and shares the experiences and lessons learned during this process.
The interest in the meaning of data is increasing. Data lineage – the traceability of data to its meaning and the reason for which the data is used – is becoming a critical success factor. Additionally, the increasing variety of data calls for a grip on the individual data sources. The lack of available data specialists makes it necessary to make available knowledge explicit. The introduction of a distributed data architecture provides the final push to “clean up the attic of data”.
The processing of data is therefore not only a logistical challenge, but also requires a reliable approach to map the meaning of data. This approach goes beyond the traditional description of the structure of the data warehouse: a semantic approach is required.
This semantic approach takes the problem space as the starting point for the description: the domain for which data is acquired. An accurate analysis and model of the domain is the basis for a translation to a model of the data itself as it manifests in the solution space. The result can be seen as a knowledge graph: a network of connected (linked) data, including the definition of this data and the lineage to the basis for this data in legislation, compliance guidelines and company definitions.
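The idea of a knowledge graph that links data elements to their definitions and to the basis for those definitions can be sketched in a few lines of plain Python. All concept, column and table names below are hypothetical illustrations, not an actual enterprise model:

```python
# Illustrative sketch: a tiny "knowledge graph" as subject-predicate-object
# triples, linking a physical data element to its concept, the concept's
# definition, and the (hypothetical) legal basis for that concept.

triples = {
    ("PensionRight", "isA", "Concept"),
    ("PensionRight", "hasDefinition", "Accrued entitlement of a participant"),
    ("PensionRight", "hasLegalBasis", "Pension legislation (hypothetical reference)"),
    ("accrued_amount", "represents", "PensionRight"),
    ("accrued_amount", "storedIn", "dwh.participant_rights"),
}

def lineage(element):
    """Trace a data element back to its concept and that concept's grounding."""
    concepts = [o for s, p, o in triples if s == element and p == "represents"]
    result = {}
    for c in concepts:
        result[c] = {p: o for s, p, o in triples if s == c and p != "isA"}
    return result

# lineage("accrued_amount") answers: which concept does this column
# represent, how is it defined, and on what basis?
```

The point of the sketch is that lineage becomes a query over explicit knowledge, rather than tribal knowledge held by scarce specialists.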
Such an approach is not only relevant for the data warehouse: the result is an explicit, unambiguous recording of the knowledge about the relevant data in an organization. Marco Brattinga takes you into the world of enterprise semantic data management through the following topics:
As data scientists, our impact on the world is growing more significant every day. But what concrete steps can we take to become more responsible data scientists? In this session, we introduce you to the realm of responsible data science and how to embed ethics into your technology. Tanja Ubert and Gabriella Obispa will share their vision on how we need to include responsibility in our work with data. What questions do we need to ask, and what responsibility do we, as specialists, have to take on when collecting, using and implementing data solutions in our world?
Data architectures are becoming increasingly complex due to the need to serve many purposes: multiple personas, ranging from operational data users to data scientists, need access to a variety of managed, governed data and demand real-time, self-service reporting and analytics. Applying principles while designing data architectures helps simplify the development and usage of those architectures by developers and end users. We apply the following principles:
In this session we will show how Connected Data Group and 2150 Datavault Builder work together in designing the simplified architecture by focusing on data modelling with Data Vault and automating the data engineering process with Datavault Builder.
During this session you will learn:
How do you reap the full benefits of Data Vault? Data Vault is the modeling approach for becoming agile in data warehousing. The Data Vault approach is unbeatable, especially when the technical implementation is abstracted through automation. Datavault Builder combines its Data Vault driven data warehouse approach with a standardized development process that allows development resources to be scaled and allocated flexibly.
Quickly develop your own Data Warehouse. Rely on the visual element of Datavault Builder to facilitate the collaboration between business users and IT for fully accepted and sustainable project outcomes. Immediately lay the foundation for new reports or integrate new sources of data in an agile way. Deliver new requirements and features with fully automated deployment. Agile Data Warehouse development and CI/CD become a reality.
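As a rough illustration of the underlying Data Vault pattern (a sketch only, not Datavault Builder's actual implementation), a hub keyed by a hash of the business key might look like this:

```python
# Sketch of the core Data Vault pattern: a hub row keyed by a hash of the
# normalized business key. Hubs are insert-only; descriptive attributes
# live in separate satellites. Table and key names are illustrative.
import hashlib
from datetime import datetime, timezone

def hub_key(business_key: str) -> str:
    """Deterministic surrogate key: hash of the normalized business key."""
    return hashlib.md5(business_key.strip().upper().encode()).hexdigest()

def load_hub(hub: dict, business_key: str, record_source: str) -> str:
    key = hub_key(business_key)
    if key not in hub:  # one row per business key, regardless of source
        hub[key] = {
            "business_key": business_key.strip().upper(),
            "load_dts": datetime.now(timezone.utc),
            "record_source": record_source,
        }
    return key

hub_customer = {}
k1 = load_hub(hub_customer, "CUST-001", "crm")
k2 = load_hub(hub_customer, "cust-001 ", "billing")  # same business key
assert k1 == k2 and len(hub_customer) == 1
```

Because the pattern is this mechanical, it lends itself to the kind of automation and fully automated deployment described above.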
Having the right data in the right place at the right time, with the right quality, is becoming increasingly important for supporting business decisions and for optimizing, automating and powering AI models. Just as with software development, you want to deliver new functionality of premium quality much faster. You don’t want to make new data, new insights and new AI models available to users on a fixed monthly cycle, but as soon as they are ready for deployment. That is what DataOps can achieve in theory. In practice, however, one faces serious challenges that make it a lot more difficult to effectuate the DataOps process in an organization – for example, how to deal with development sandboxes and representative test data across systems.
In this session Niels Naglé and Vincent Goris will show what DataOps is and that it is not just DevOps for data. They will discuss the unique challenges, solutions to these challenges and their lessons learned.
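One concrete DataOps building block is an automated data check that runs in the delivery pipeline, so data changes are validated the same way code changes are. A minimal sketch (column names are illustrative):

```python
# Minimal data quality gate for a delivery pipeline: a batch may only be
# deployed if no issues are found. Required columns are illustrative.

def check_batch(rows, required=("id", "amount")):
    """Return a list of issues; an empty list means the batch may deploy."""
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append(f"row {i}: missing {col}")
        if row.get("id") in seen_ids:
            issues.append(f"row {i}: duplicate id {row['id']}")
        seen_ids.add(row.get("id"))
    return issues

good = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
bad = [{"id": 1, "amount": None}, {"id": 1, "amount": 3.0}]
assert check_batch(good) == []
assert len(check_batch(bad)) == 2  # one missing amount, one duplicate id
```

In a real pipeline such checks would run automatically on every change, against representative test data, before anything reaches production.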
We’ve all seen studies that showed the enormous amounts of data that are created on this planet every day. However, a large part of this data is not new but copied data. In existing data architectures, such as data warehouses, a lot of copying is taking place. But modern architectures, such as data lakes and data hubs, also rely heavily on copying data. This rampant copying must be reduced. We don’t always think about it, but copying data has many disadvantages, including higher data latency, complex forms of data synchronization, more complex data security and data privacy, higher development and maintenance costs, and degraded data quality. It is time to apply the data minimization principle when designing new data architectures. This means that the aim is to minimize copied data. In other words, users gain more access to original data and move from data-by-delivery to data-on-demand. The latter corresponds to what has happened in the movie industry: from collecting videos at a store to video-on-demand. In short, data minimization means that we are going to ‘Netflix’ our data.
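The contrast between data-by-delivery and data-on-demand can be sketched in a few lines of Python (illustrative data only):

```python
# Data-by-delivery vs data-on-demand, in miniature.

source = {"orders": [{"id": 1, "total": 20}, {"id": 2, "total": 55}]}

# Data-by-delivery: a materialized snapshot that goes stale when the
# source changes, and that must be synchronized, secured and maintained.
copied = list(source["orders"])

# Data-on-demand: a view evaluated against the original at request time.
def large_orders(threshold=50):
    return [o for o in source["orders"] if o["total"] > threshold]

source["orders"].append({"id": 3, "total": 80})
assert len(copied) == 2            # the copy has gone stale
assert len(large_orders()) == 2    # the view reflects current data
```

The view has higher per-request cost but no copy to synchronize, secure or keep fresh — which is exactly the trade-off data minimization asks architects to weigh.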
The data warehouse is over thirty years old. The data lake just turned ten. So, is it time for something new? In fact, two new patterns have recently emerged—data fabric and data mesh—promising to revolutionise the delivery of BI and analytics.
Data fabric focuses on the automation of data delivery and discovery using artificial intelligence and active metadata. Data mesh has a very novel take on today’s problems, suggesting we must take a domain driven approach to development to eliminate centralised bottlenecks. Each approach has its supporters and detractors, but who is right? More importantly, should you be planning to replace your existing systems with one or the other?
In this session, Dr. Barry Devlin will explore what data fabric and mesh are, what they offer, and how they differ. We will compare them to existing patterns, such as data warehouse and data lake, data hub and even data lakehouse, using the Digital Information Systems Architecture (DISA) as a base. This will allow us to clearly see their strengths and weaknesses and understand when and how you might choose to move to one or the other.
What You Will Learn:
In this interactive session Lawrence Corr shares his thoughts and experiences on using visual collaboration platforms such as Miro and MURAL for gathering BI data requirements remotely with BEAM (Business Event Analysis and Modeling) for designing star schemas. Learn how visual thinking, narrative, a simple script with 7Ws and lots of real and digital Post-it ™ notes can get your stakeholders thinking dimensionally and capturing their own data requirements with agility in-person and at a distance.
Attendees will have the opportunity to vote visually on a virtual whiteboard and should have their smartphones ready to send Lawrence some digital notes to play the ‘7W game’ using the Post-it app.
This session will cover:
Developing a machine learning strategy designed to maximize business value in the age of Deep Learning
Deep Learning is so dominant in some discussions of AI and machine learning that many organizations feel that they need to try to keep up with the latest trends. But does it offer the best path for your organization? What is this technology all about and why should both executives and practitioners understand its history?
All business leaders know that they have to embrace analytics or be left behind. However, technology changes so rapidly that it is difficult to know who to hire, which technologies to embrace, and how to proceed. The truth is that traditional machine learning techniques are a better fit for most organizations than chasing after the latest trends. Still, the hyped techniques are popular for a reason, so leaders with responsibility for analytics need a high-level understanding of them.
Learning objectives
Companies rely on modern cloud data architectures to transform their organizations into the agile analytics-driven cultures needed to be competitive and resilient. The modern cloud reference architecture applies data architecture principles into cloud platforms with current database and analytics technologies. However, many organizations quickly get in over their head without a carefully prioritized and actionable roadmap aligned with business initiatives and priorities. Building such a roadmap follows a step-by-step process that produces a valuable communication tool for everyone to deliver together.
This session will cover the four significant steps to align the data strategy and roadmap with the business. We’ll start with translating business strategy into data and analytics strategies with the Enterprise Analytics Capabilities Framework. This is followed by a logical modern cloud reference data architecture that can leverage agile architecture techniques for implementation as a modern data infrastructure on any cloud, hybrid or multi-cloud environment. This will provide the basis for drilling deeper into architecture patterns and developing proficiency with DataOps and MLOps.
This session will cover:
Do you want to generate more value out of your data with less effort and cost?
This presentation will help you to reduce your time to market and increase your development efficiency. Erik discusses projects he has been involved in and explains how he was able to accelerate and streamline them using WhereScape. His main focus will be on a Data Vault 2.0 implementation he was involved in at a large bank.
WhereScape Data Automation software accelerates the design, build, documentation and management of complex data ecosystems. It automates repetitive manual tasks such as hand coding, enabling developers to produce architectures in a fraction of the time, without human error.
The role of data in business processes has never been more critical. But as we develop new technologies and new skills it feels like we meet new dilemmas at every turn. Concerns about governance and compliance seem to conflict with demands for agility and collaboration. The expanding scope of the data we work with brings new ethical concerns to light.
So, are we doomed to a constant struggle for control of our data assets? I don’t think so. In this session, I’ll sketch out a provocative, but hopefully useful idea – that we have confused ownership and accountability, governance and compliance, openness and collaboration. We’ll look at some potentially new approaches, which aim to resolve some of the complex puzzles of enterprise data.
We have all heard “This is the golden age of data” and “Data is the new oil” but that does not necessarily mean your senior executives are anxious to participate in Conceptual Data Modelling / Concept Modelling. The speaker recently had an interesting exception to the reluctance of senior executives to participate in data modelling. Led by the Chief Strategy Officer, a group of C-level executives and other senior leaders at a mid-size financial institution asked Alec to facilitate three days of Concept Modelling sessions.
Fundamentally, a Concept Model is all about improving communication among various stakeholders, but the communication often gets lost – in the clouds, in the weeds, or somewhere off to the side. This is bad enough in any modelling session, but is completely unacceptable when working at the C-level. Drawing on forty years of successful consulting and modelling experience, this presentation will illustrate core techniques and necessary behaviors to keep even your senior executives involved and engaged.
Key points in the presentation include:
Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems to be. Software for building these models is widely available, but frequently yields disappointing results. Many fail to recognize that poor problem definition was the real problem, and instead conclude that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine-learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion so they continually support decision-making and deliver lasting impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will give an overview of the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of the everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results, whether you are brand new to predictive analytics or have made some attempts but been disappointed in the results so far. Veteran users of these techniques will also benefit, because the traditional techniques will be compared with some features of newer ones. We will see that, while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
Who is it for?
Course Description
1. How to choose the best machine learning strategy
2. Decision Trees: Still the best choice for many everyday challenges
3. Introducing the CART decision tree
4. Additional Supervised Techniques
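The split criterion at the heart of CART can be sketched in plain Python: choose the threshold on a feature that minimizes weighted Gini impurity. Real implementations (scikit-learn's, for example) add pruning, multi-feature search and categorical handling; this is a teaching sketch only:

```python
# CART's split criterion in miniature: find the threshold t on a numeric
# feature that minimizes the weighted Gini impurity of the x <= t split.

def gini(labels):
    """Gini impurity for binary labels (0/1)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return (threshold, impurity) of the best x <= t split."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
t, score = best_split(xs, ys)
assert t == 3 and score == 0.0  # perfect split between 3 and 10
```

A full tree simply applies this search recursively to each resulting partition, which is why decision trees remain so easy to build and to explain.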
By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We’ll introduce the subject with enough detail for you to take the first, important, practical steps to being well governed in today’s regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. However, although simple, they offer subtle and effective capabilities for planning features and releases and for aligning teams such as marketing and management around a vision.
Who’s it for?
Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
Adopting the DataOps methodology helps agile teams deliver data and analytics faster and in a more manageable way on modern data infrastructures and ecosystems. DataOps is critical for companies that want to remain resilient in data and analytics delivery in a volatile and uncertain global business environment. Going beyond DevOps for continuous deployments, DataOps leverages principles from other disciplines to evolve data engineering and management.
Companies need data and analytics more than ever to be agile and competitive in today’s fast-changing environment. DataOps can be an enterprise-wide initiative or an independent agile delivery team working to improve how they deliver data analytics for their customer. Gaining traction takes time and ongoing support.
This seminar will cover:
Course Description
1. Understanding why we need to change
2. Making DataOps Work
The 7 key concepts to focus on for DataOps
The 2 key processes to focus on for DataOps
3. Managing DataOps: defining Metrics and Maturity Models
Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems to be. Modeling software to build these models is widely available but often results in disappointing results. Many fail to even recognize that proper problem definition was the problem. They likely conclude that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine-learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision-making for residual impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results whether you are brand new to predictive analytics or you’ve made some attempts but have been disappointed in the results so far. Veteran users of these techniques will also benefit because a comparison will be made between these traditional techniques and some features of newer techniques. We will explore that while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
Who is it for?
Course Description
1. How to choose the best machine learning strategy
2. Decision Trees: Still the best choice for many everyday challenges
3. Introducing the CART decision tree
4. Additional Supervised Techniques
By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We’ll introduce the subject with enough detail for you take the first, important, practical steps to being well governed for today’s regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. However, although simple, they offer subtle and effective capabilities for planning features and releases and for aligning teams such as marketing and management around a vision.
Who’s it for?
Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
Adopting the DataOps Methodology is helping agile teams deliver data and analytics faster and more manageable in modern data infrastructure and ecosystems. DataOps is critical for companies to become resilient with data and analytics delivery in a volatile and uncertain global business environment. Going beyond DevOps for continuous deployments, DataOps leverages principles from other disciplines to evolve data engineering and management.
Companies need data and analytics more than ever to be agile and competitive in today’s fast-changing environment. DataOps can be an enterprise-wide initiative or an independent agile delivery team working to improve how they deliver data analytics for their customer. Gaining traction takes time and ongoing support.
This seminar will cover:
Course Description
1. Understanding why we need to change
2. Making DataOps Work
The 7 key concepts to focus on for DataOps
The 2 key processes to focus on for DataOps
3. Managing DataOps: defining Metrics and Maturity Models
Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems to be. Modeling software to build these models is widely available but often results in disappointing results. Many fail to even recognize that proper problem definition was the problem. They likely conclude that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine-learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision-making for residual impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results whether you are brand new to predictive analytics or you’ve made some attempts but have been disappointed in the results so far. Veteran users of these techniques will also benefit because a comparison will be made between these traditional techniques and some features of newer techniques. We will explore that while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
Who is it for?
Course Description
1. How to choose the best machine learning strategy
2. Decision Trees: Still the best choice for many everyday challenges
3. Introducing the CART decision tree
4. Additional Supervised Techniques
By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
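To make the idea concrete, here is a minimal, self-contained sketch (illustrative only, not course material) of the one-split building block behind decision trees such as CART: choosing a threshold on a numeric feature by minimizing Gini impurity, the splitting criterion CART uses. The feature and label names are invented for the example.

```python
def gini(labels):
    """Gini impurity of a list of binary class labels (0/1)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_split(xs, ys):
    """Find the threshold on a single numeric feature that minimizes
    the weighted Gini impurity of the two resulting partitions."""
    best_threshold, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

# Toy data: a customer "churns" (1) when monthly usage is low.
usage = [5, 12, 18, 25, 30, 42]
churn = [1, 1, 1, 0, 0, 0]
threshold, impurity = best_split(usage, churn)
print(threshold, impurity)  # a split at 18 separates the classes perfectly
```

A full tree simply applies this search recursively to each partition; libraries such as scikit-learn automate exactly this procedure.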
New users of these established techniques are often impressed with how easy it all seems to be. Modeling software for building these models is widely available, but it often yields disappointing results. Many users fail to recognize that poor problem definition was the real issue, and instead conclude that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion so they continually support decision-making and deliver lasting impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results, whether you are brand new to predictive analytics or you have made some attempts but have been disappointed in the results so far. Veteran users of these techniques will also benefit from a comparison between these traditional techniques and some features of newer ones. We will see that, while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
Who is it for?
Course Description
1. How to choose the best machine learning strategy
2. Decision Trees: Still the best choice for many everyday challenges
3. Introducing the CART decision tree
4. Additional Supervised Techniques
By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We’ll introduce the subject with enough detail for you to take the first, important, practical steps to being well governed in today’s regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. Though simple, they offer subtle and effective capabilities for planning features and releases, and for aligning teams such as marketing and management around a shared vision.
Who’s it for?
Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
Adopting the DataOps methodology helps agile teams deliver data and analytics faster and in a more manageable way within modern data infrastructures and ecosystems. DataOps is critical for companies that want to become resilient in their data and analytics delivery in a volatile and uncertain global business environment. Going beyond DevOps and continuous deployment, DataOps leverages principles from other disciplines to evolve data engineering and management.
Companies need data and analytics more than ever to be agile and competitive in today’s fast-changing environment. DataOps can be an enterprise-wide initiative or an independent agile delivery team working to improve how they deliver data analytics for their customer. Gaining traction takes time and ongoing support.
This seminar will cover:
Course Description
1. Understanding why we need to change
2. Making DataOps Work
The 7 key concepts to focus on for DataOps
The 2 key processes to focus on for DataOps
3. Managing DataOps: defining Metrics and Maturity Models
Limited time? Join one day!
Can you only attend one day? You can register for just the first or just the second conference day, or of course for the full conference. Delegates also gain four months’ access to the conference recordings of the selected day, so there is no need to miss out on any session.
Payment by credit card is also available. Please mention this in the Comment field upon registration; further instructions for credit card payment can be found on our customer service page.
View the Adept Events calendar
“Longer sessions created room for more depth and dialogue. That is what I appreciate about this summit.”
“Inspiring summit with excellent speakers, covering the topics well and from different angles. Organization and venue: very good!”
“Inspiring and well-organized conference. Present-day topics with many practical guidelines, best practices and do's and don'ts regarding information architecture such as big data, data lakes, data virtualisation and a logical data warehouse.”
“A fun event and you learn a lot!”
“As a BI Consultant I feel inspired to recommend this conference to everyone looking for practical tools to implement a long term BI Customer Service.”
“Very good, as usual!”