Code splitting with Sitecore JSS + React

Page weight - how much data a user needs to download to view your website - is a big deal in JavaScript applications. The more script that an application loads, the longer it takes to render for a user - especially in critical mobile scenarios. The longer it takes an app to render, the less happy the users of that app are. JavaScript is especially important to keep lightweight, because JS is not merely downloaded like an image - it also has to be parsed and compiled by the browser. Especially on slower mobile devices, this parsing can take longer than the download! So less script is a very good thing.

Imagine a large Sitecore JSS application with a large number of JavaScript components. With the default JSS setup, the entire app's JS must be downloaded by the user before any page in the application can render. This is simple to reason about and performs well on smaller sites, but on a large site it is detrimental to performance if the home page must load 40 components that are not used on that route in order to render.

Enter Code Splitting

Code Splitting is a term for breaking up your app’s JS into several chunks, usually via webpack. There are many ways that code splitting can be set up, but we’ll focus on two popular automatic techniques: route-level code splitting, and component-level code splitting.

Route-level code splitting creates a JS bundle for each route in an application. Because of this, it relies on the app using static routing - in other words, knowing all routes in advance and having static components on those routes. This is probably the most widespread code splitting technique, but it is fundamentally incompatible with JSS because the app's structure and layout are defined by Sitecore. We do not know all of the routes an app has at build time, nor do we know which components are on those routes, because that is also defined by Sitecore.
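For illustration only, here is a rough sketch of what route-level splitting typically looks like in a statically routed React app, using the react-loadable library we'll introduce below (the route paths and page components are hypothetical):

// routes.js - a static route table: every route and its component is known at build time
import React from 'react';
import Loadable from 'react-loadable';

const Loading = () => <div>Loading...</div>;

// each route's component becomes its own lazily loaded chunk via dynamic import
const Home = Loadable({ loader: () => import('./pages/Home'), loading: Loading });
const About = Loadable({ loader: () => import('./pages/About'), loading: Loading });

export const routes = [
  { path: '/', exact: true, component: Home },
  { path: '/about', component: About },
];

This only works when the route table can be written down at build time - which, as noted, a JSS app cannot do.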

Component-level code splitting creates a JS bundle for each component in an application. This results in quite granular bundles, but excellent compatibility with JSS, because it works well with dynamic routing: we only need to load the JS for the components that an author has added to a given route. Each component's chunk is also individually cacheable by the browser, which gives great caching across routes too.

Component-level Code Splitting with React

The react-loadable library provides excellent component-level code splitting capabilities to React apps. Let’s add it to the JSS React app and split up our components!

Step 1: Add react-loadable

We need some extra npm packages to make this work.

// yarn
yarn add react-loadable
yarn add babel-plugin-syntax-dynamic-import babel-plugin-dynamic-import-node --dev

// npm
npm i react-loadable
npm i babel-plugin-syntax-dynamic-import babel-plugin-dynamic-import-node --save-dev

Step 2: Make the componentFactory use code splitting

In order to use code splitting, we have to tell create-react-app (which uses webpack) how to split our output JS. This is pretty easy using dynamic import, which works like a normal import or require but loads the module lazily at runtime. react-loadable provides a simple syntax to wrap any React component in a lazy-loading shell.
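If you haven't used dynamic import before, the difference looks roughly like this (an illustrative sketch, not code from the JSS app):

// static import: ContentBlock is always included in the chunk that imports it
import ContentBlock from '../components/ContentBlock';

// dynamic import: webpack emits a separate chunk for the module and loads it on demand,
// returning a promise that resolves with the module once its chunk has downloaded
import('../components/ContentBlock').then((mod) => {
  const LazyContentBlock = mod.default;
  // ...render the component now that its code is available
});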

In JSS applications, the Component Factory is a mapping of the names of components into the implementation of those components - for example to allow the JSS app to resolve the component named 'ContentBlock', provided by the Sitecore Layout Service, to a React component defined in ContentBlock.js. The Component Factory is a perfect place to put component-level code splitting.

In a JSS React app, the Component Factory is generated code by default - inferring the components to register based on filesystem conventions. The /scripts/generate-component-factory.js file defines how the code is generated. The generated code - created when a build starts - is emitted to /src/temp/componentFactory.js. Before we alter the code generator to generate split components, let’s compare registering a component in each way:

JSS React standard componentFactory.js
// static import
import ContentBlock from '../components/ContentBlock';

// create component map (identical code)
const components = new Map();
components.set('ContentBlock', ContentBlock);
react-loadable componentFactory.js
import React from 'react';
import Loadable from 'react-loadable';

// loadable dynamic import component - lazily loads the component implementation when it is first used
const ContentBlock = Loadable({
  // setting webpackChunkName lets us have a nice chunk filename like ContentBlock.hash.js instead of 1.hash.js
  loader: () => import(/* webpackChunkName: "ContentBlock" */ '../components/ContentBlock'),
  // this is a react component shown while lazy loading. See the react-loadable docs for guidance on making a good one.
  loading: () => <div>Loading...</div>,
  // this module name should match the webpackChunkName that was set. This is used to determine dependency during server-side rendering.
  modules: ['ContentBlock'],
});

// create component map (identical code)
const components = new Map();
components.set('ContentBlock', ContentBlock);

Updating the Component Factory Code Generation

In order to have our component factory use splitting, let’s update the code generator to emit react-loadable component definitions.

Modify /scripts/generate-component-factory.js:

// add this function
function LoadableComponent(importVarName, componentFolder) {
  return `const ${importVarName} = Loadable({
  loader: () => import(/* webpackChunkName: "${componentFolder}" */ '../components/${componentFolder}'),
  loading: () => <div>Loading...</div>,
  modules: ['${componentFolder}'],
});`;
}

// modify generateComponentFactory()...

// after const imports = [];
imports.push(`import React from 'react';`);
imports.push(`import Loadable from 'react-loadable';`);

// change imports.push(`import ${importVarName} from '../components/${componentFolder}';`); to
imports.push(LoadableComponent(importVarName, componentFolder));

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the loader.

Try it!

Start your app up with jss start. At this point Code Splitting should be working: you should see a JS file get loaded for each component on a route, and a short flash of Loading... when the route initially loads.

But there are still some rough edges. If the app is server-side rendered in headless or integrated mode, none of the content will be present, because the dynamic imports are asynchronous and do not resolve before SSR completes. We'd also like to avoid the flash of loading text when the page was server-side rendered. Well guess what - we can do all of that!

Step 3: Configure code splitting for Server-Side Rendering

Server-side rendering with code splitting is a bit more complex. There are several pieces that the app needs to support:

  • Preload all lazy loaded components, so that they render immediately during server-side rendering instead of starting to load async and leaving a loading message in the SSR HTML.
  • Determine which lazy loaded components were used during rendering, so that we can preload the same components’ JS files on the client-side to avoid the flash of loading text.
  • Emit <script> tags to preload the used components’ JS files on the client side into the SSR HTML.

3.1: Configure SSR Webpack to understand dynamic import

The build of the server-side JS bundle is separate from the client bundle. We need to teach the server-side build how to compile the dynamic import expressions. Open /server/server.webpack.config.js.

// add these after other imports
const dynamicImport = require('babel-plugin-syntax-dynamic-import');
const dynamicImportNode = require('babel-plugin-dynamic-import-node');
const loadableBabel = require('react-loadable/babel');

// add the plugins to your babel-loader section
//...
use: {
  loader: 'babel-loader',
  options: {
    babelrc: false,
    presets: [env, stage0, reactApp],
    // [CS] ADDED FOR CODE SPLITTING
    plugins: [dynamicImport, dynamicImportNode, loadableBabel],
  },

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the webpack config.

3.2: Configure server.js

The /server/server.js is the entry point to the JSS React app when it’s rendered on the server-side. We need to teach this entry point how to successfully execute SSR with lazy loaded components, and to emit preload script tags for used components.

// add to the top
import Loadable from 'react-loadable';
import manifest from '../build/asset-manifest.json';

function convertLoadableModulesToScripts(usedModules) {
  return Object.keys(manifest)
    .filter((chunkName) => usedModules.indexOf(chunkName.replace('.js', '')) > -1)
    .map((k) => `<script src="${manifest[k]}"></script>`)
    .join('');
}

// add after const graphQLClient...
const loadableModules = [];

// add after initializei18n()...
.then(() => Loadable.preloadAll())

// wrap the `<AppRoot>` component with the loadable used-component-capture component
<Loadable.Capture report={(module) => loadableModules.push(module)}>
  <AppRoot path={path} Router={StaticRouter} graphQLClient={graphQLClient} />
</Loadable.Capture>

// append another .replace() to the rendered HTML transformations
.replace('<script>', `${convertLoadableModulesToScripts(loadableModules)}<script>`);

You can find a completed gist of these changes here with better explanatory comments. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the entry point.

3.3: Configure client-side index.js

The /src/index.js is the entry point to the JSS React app when it’s rendered on the browser-side. We need to teach this entry point how to wait to ensure that all preloaded components that SSR may have emitted to the page are done loading before we render the JSS app the first time to avoid a flash of loading text.

// add to the top
import Loadable from 'react-loadable';

// add after i18ninit()
.then(() => Loadable.preloadReady())

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the entry point.

Step 4: Try it out

With the code changes to enable splitting complete, deploy your app to Sitecore and try it in integrated mode. You should see the SSR HTML include a script tag for every component used on the route, and the rendering will wait until those components have preloaded before showing the application. This preloading means the browser does not have to wait for React to boot before it begins loading the components, resulting in a much faster page load.

The ideal component loading technique for each app will differ depending on the number and size of its components. Using the standard JSS styleguide sample app, enabling component code splitting like this resulted in transferring almost 40 KB less data when loading the home page (which has a single component) versus the styleguide page (which has many components). This difference grows with the total number of components in a JSS app - for most apps, code splitting is a smart idea if the app has many components that are used on only a few pages.


Deploying Disconnected JSS Apps

It’s possible to deploy server-side rendered Sitecore JSS sites in disconnected mode. When deployed this way, the JSS app will run using disconnected layout and content data, and will not use a Sitecore backend.

Why would I want this?

In a word: previewing. Imagine the early development and prototyping phase of a JSS implementation. There's a team of designers, UX architects, and frontend developers designing the app and its interactions. In many cases, Sitecore developers are not involved yet - or if they are, there is no Sitecore instance set up.

This is one of the major advantages of JSS - using disconnected mode, a team like this can develop non-throwaway frontend for the final JSS app. But stakeholders will want to review the in-progress JSS app somewhere other than http://localhost:3001, so how do we put a JSS site somewhere shared without having a Sitecore backend?

Wondering about real-world usage?
The JSS docs use this technique.

How does it work?

Running a disconnected JSS app is a lot like headless mode: a reverse proxy receives incoming requests, forwards them to the Layout Service, then transforms the Layout Service result into HTML using JavaScript server-side rendering and returns it. In a disconnected deployment, instead of the proxy sending requests to a Sitecore-hosted Layout Service, the requests are proxied to the disconnected layout service.

Setting up a disconnected app step by step

To deploy a disconnected app you’ll need a Node-compatible host. This is easiest with something like Heroku or another PaaS Node host, but it can also be done on any machine that can run Node. For our example, we’ll use Heroku.

Configuring the app for disconnected deployment

Any of the JSS sample templates will work for this technique. Create yourself a JSS app with the CLI in 5 minutes if you need one to try.

  1. Ensure the app has no scjssconfig.json in the root. This will make the build use the local layout service.
  2. Create a build of the JSS app with jss build. This will build the artifacts that the app needs to run.
  3. Install npm packages necessary to host a disconnected server: yarn add @sitecore-jss/sitecore-jss-proxy express (substitute npm i --save if you use npm instead of yarn)
  4. Deploy the following code to /scripts/disconnected-ssr.js (or similar path). Note: this code is set up for React, and will require minor tweaks for Angular or Vue samples (build -> dist)

    const express = require('express');
    const { appName, language, sitecoreDistPath } = require('../package.json').config;
    const scProxy = require('@sitecore-jss/sitecore-jss-proxy').default;
    const { createDefaultDisconnectedServer } = require('@sitecore-jss/sitecore-jss-dev-tools');
    const app = require('../build/server.bundle');

    const server = express();

    // the port the disconnected app will run on
    // Node hosts usually pass a port to run on using a CLI argument
    const port = process.argv[2] || 8080;

    // create a JSS disconnected-mode server
    createDefaultDisconnectedServer({
      port,
      appRoot: __dirname,
      appName,
      language,
      server,
      afterMiddlewareRegistered: (expressInstance) => {
        // to make disconnected SSR work, we need to add additional middleware (beyond mock layout service) to handle
        // local static build artifacts, and to handle SSR by loopback proxying to the disconnected
        // layout service on the same express server

        // Serve static app assets from local /build folder into the sitecoreDistPath setting
        // Note: for Angular and Vue samples, change /build to /dist to match where they emit build artifacts
        expressInstance.use(
          sitecoreDistPath,
          express.static('build', {
            fallthrough: false, // force 404 for unknown assets under /dist
          })
        );

        const ssrProxyConfig = {
          // api host = self, because this server hosts the disconnected layout service
          apiHost: `http://localhost:${port}`,
          layoutServiceRoute: '/sitecore/api/layout/render/jss',
          apiKey: 'NA',
          pathRewriteExcludeRoutes: ['/dist', '/build', '/assets', '/sitecore/api', '/api'],
          debug: false,
          maxResponseSizeBytes: 10 * 1024 * 1024,
          proxyOptions: {
            headers: {
              'Cache-Control': 'no-cache',
            },
          },
        };

        // For any other requests, we render app routes server-side and return them
        expressInstance.use('*', scProxy(app.renderView, ssrProxyConfig, app.parseRouteUrl));
      },
    });
  5. Test it out. From a console in the app root, run node ./scripts/disconnected-ssr.js. Then in a browser, open http://localhost:8080 to see it in action!

Deploying the disconnected app to Heroku

Heroku is a very easy to use PaaS Node host, but you can also deploy to Azure App Service or any other service that can host Node. To get started, sign up for a Heroku account and install and configure the Heroku CLI.

  1. We need to tell Heroku to build our app when it’s deployed.

    • Locate the scripts section in the package.json
    • Add the following script:
      "postinstall": "npm run build"
  2. We need to tell Heroku the command to use to start our app.

    • Create a file in the app root called Procfile
    • Place the following contents:
      web: node ./scripts/disconnected-ssr.js $PORT
  3. To deploy to Heroku, we’ll use Git. Heroku provides us a Git remote that we can push to that will deploy our app. To use Git, we need to make our app a Git repository:

    git init
    git add -A
    git commit -m "Initial commit"
  4. Create the Heroku app. This will create the app in Heroku and configure the Git remote to deploy to it. Using a console in your app root:

    heroku create <your-heroku-app-name>
  5. Configure Heroku to install node devDependencies (which we need to start the app in disconnected mode). Run the following command:

    heroku config:set NPM_CONFIG_PRODUCTION=false YARN_PRODUCTION=false
  6. Deploy the JSS app to Heroku:

    git push -u heroku master
  7. Your JSS app should be running at https://<yourappname>.herokuapp.com!

In case it’s not obvious, do not use this setup in production. The JSS disconnected server is not designed to handle heavy production load.

Announcing Sitecore JSS XSLT Support

Sitecore Team X is proud to announce the final public release of Sitecore JavaScript Services (JSS) with full XSLT 3.0 support!

Why XSLT 3.0?

XSLT 3.0 allows for JSON transformations, so you can use the full power of modern JSS while retaining the XSLT developer experience that Site Core developers know and love.

Our XSLT 3.0 engine allows for client-side rendering by transforming hard-to-read JSON into plain, sensible XML using XSLT 3.0 standards-compliant JSON-to-XML transformations. Instead of ugly JSON, your JSS renderings can use simple, easy-to-read XML like this:

<j:map xmlns:j="http://www.w3.org/2013/XSL/json">
  <j:map key="fields">
    <j:map key="title">
      <j:map key="value">
        <j:string key="editable">SiteCore Experience Platform + JSS + XSLT</j:string>
        <j:string key="value">SiteCore Experience Platform + JSS + XSLT</j:string>
      </j:map>
    </j:map>
  </j:map>
</j:map>

It’s just as simple to make a JSS XSLT to transform your rendering output. Check out this super simple “hello world” sample:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:math="http://www.w3.org/2005/xpath-functions/math"
  xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl"
  xmlns:SiteCore="http://www.SiteCore.com/demandMore/xslt"
  xmlns:h="http://www.w3.org/1999/xhtml"
  xmlns:fn="http://www.w3.org/2005/xpath-functions"
  xmlns:j="http://www.w3.org/2005/xpath-functions"
  exclude-result-prefixes="xs math xd h SiteCore"
  version="3.0"
  expand-text="yes"
>
  <xsl:output method="text" indent="yes" media-type="text/json" omit-xml-declaration="yes"/>
  <xsl:variable name="fields-a" select="json-to-xml(/)"/>
  <xsl:template match="/">
    <xsl:variable name="fields-b">
      <xsl:apply-templates select="$fields-a/*"/>
    </xsl:variable>
    {xml-to-json($fields-b,map{'indent':true()})}
  </xsl:template>
  <xsl:template match="/j:map">
    <j:map>
      <j:array key="fields">
        <xsl:apply-templates select="j:map[@key='fields']/j:map" mode="rendering"/>
      </j:array>
    </j:map>
  </xsl:template>
  <xsl:template match="j:map" mode="rendering">
    <j:map>
      <j:string key="title">{j:string[@key='title:value']||' '||j:string[@key='title:editable']}</j:string>
      <xsl:if test="j:boolean[@key='experienceEditor']">
        <j:string key="editable">{j:string[@key='editable']/text()}</j:string>
      </xsl:if>
    </j:map>
  </xsl:template>
</xsl:stylesheet>

JSON transformations allow XSLT 3.0 to be a transformative force on the modern web. Expect to see recruiters demand 10 years of XSLT 3.0 experience for Site-core candidates within the next year - this is a technology you will not want to miss out on learning.

Dynamic XSLT with VBScript

Modern JavaScript is way too difficult, so we’ve implemented a feature that lets you define dynamic XSLT templates using ultra-modern VBScript:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:math="http://www.w3.org/2005/xpath-functions/math"
  xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl"
  xmlns:SiteCore="http://www.SiteCore.com/demandMore/xslt"
  xmlns:h="http://www.w3.org/1999/xhtml"
  xmlns:fn="http://www.w3.org/2005/xpath-functions"
  xmlns:j="http://www.w3.org/2005/xpath-functions"
  exclude-result-prefixes="xs math xd h SiteCore"
  version="3.0"
  expand-text="yes"
>
  <xsl:output method="text" indent="yes" media-type="text/json" omit-xml-declaration="yes"/>
  <xsl:variable name="fields-a" select="json-to-xml(/)"/>
  <xsl:template match="/">
    <xsl:variable name="fields-b">
      <xsl:apply-templates select="$fields-a/*"/>
    </xsl:variable>
    {xml-to-json($fields-b,map{'indent':true()})}
  </xsl:template>
  <xsl:template match="/j:map" type="text/vbscript">
    Dim txtValue = xmlns.SiteCore.rendering.title.value
    Dim txtEditable = xmlns.SiteCore.rendering.title.value

    Sub xsltRender(txtVBProgrammers, txtAreDim)
      document.write(txtVBProgrammers)
    End Sub
  </xsl:template>
  <xsl:template match="j:map" mode="rendering">
    <j:map>
      <j:string key="title">{j:string[@key='title:value']||' '||j:string[@key='title:editable']}</j:string>
      <xsl:if test="j:boolean[@key='experienceEditor']">
        <j:string key="editable">{j:string[@key='editable']/text()}</j:string>
      </xsl:if>
    </j:map>
  </xsl:template>
</xsl:stylesheet>

With a quick piece of simple VBScript like that, you’ll be making awesome JSS pages like this one in no time!

How can I get this XSLT goodness?

Glad you asked. Download it right here!

What’s next for JSS

The Sitecore JSS team is always looking for opportunities to improve JSS and make it compatible with the most modern technologies. Experimentation is already under way to add ColdFusion scripting support for XSLT 3 JSS renderings, and enable PHP server-side rendering for your SiteCore solutions.

Just another way that we help you succeed in your Site core implementations.

Send us feedback on our roadmap

The lazy developer's way to install Sitecore 9

Since Sitecore 9 was released, there’s been a lot of talk about the new installation techniques that it necessitates - namely, the move towards infrastructure as code and the Sitecore Install Framework (SIF). It’s no secret that installing Sitecore 9 can be a bit more difficult than previous versions, but it really doesn’t have to be.

This is the part where you might be expecting me to announce some crazy script I wrote, but not this time because someone else already did the work. So let’s address the elephant in the room.

Solr the easy way

Back in the day I wrote some scripts to install Solr using Bitnami. It worked, but I’d always wanted to find the time to make it simpler and less dependent on Bitnami and their notoriously hard to find older versions. Well Jeremy Davis did exactly what I wanted to do and scripted the whole Solr install, locally trusted SSL certificate, and installation as a service. You can also just skip straight to the gist of the PowerShell you need to run.

Seriously, it’s awesome and you should use it especially for local dev setups.

A few things I noted when I used it:

  • Change the download URLs for Solr and NSSM to be https (encrypted). They work fine that way.
  • You must install the Java Runtime Environment (JRE) first and plug in the right version - it won’t do it for you
  • Make sure to add the $SolrHost value to your hosts file before you run the script so that it can resolve with the SSL certificate correctly (it will be bound to that name; don’t use localhost).

SIF the easy way with SIFless

SIF is a pretty amazing tool, but it has two shortcomings: it's great for automated infrastructure but not so great for a quick local setup, and it doesn't yet have an uninstall feature. Well, Rob Ahnemann wrote a handy GUI for SIF called SIFless that fixes both of those issues, making quick setups with mostly default settings easy and generating hackable SIF PowerShell scripts that let you do whatever advanced things you want after using the GUI to get started. It also generates uninstall scripts that get rid of the Windows services, Solr cores, and other artifacts that are left behind when you want to tear down a test site.

A few things to be aware of with SIFless:

  • Despite the amusing name, SIFless does require SIF to be installed!
  • The Solr URL needs to be the path to the Solr admin panel (e.g. not https://mysolr:8983, but https://mysolr:8983/solr)
  • The Solr physical path needs to be to the root of the Solr instance (if it’s the right place you’ll see ‘bin’ and ‘server’ folders; if you used the script above with defaults this would be C:\solr\solr-6.6.2)

Go forth and use Sitecore 9

Using these two tools I went from having no Solr and no Sitecore installations to having a fully operational battle station Sitecore 9 instance with xConnect in about 45 minutes. And that includes debugging my own silly mistakes. I bet you can do it faster. Get thee to a PowerShell console!

time to get SIFty

I'm a Sitecorian!

I am excited to announce that I am joining the Sitecore product team as a Platform Architect!

wth

Now normally this wouldn’t merit a whole blog post, and we’d just let the recruiters find out about it on LinkedIn. But I’m sure many folks’ next question would be around all the libraries that I maintain and what will happen to them. So let’s address the elephant in the room:

not a thing

Unicorn, Rainbow, and Dianoga

These will continue exactly as they are today as independent, community driven projects. I will still be the maintainer. The license will remain MIT.

This also includes the dependency libraries that these projects use (e.g. Configy, WebConsole, MicroCHAP).

Synthesis & Leprechaun

Ok hold up: let’s first define what Leprechaun _is_ because I haven’t publicly spoken about it yet. It’s a stable command-line code generator that works from Rainbow serialized items. Kinda like the T4 templates that a lot of people use except that it’s better because:

  • Uses Roslyn and C# Scripting, so it can run outside Visual Studio (e.g. on a CI server)
  • Ridiculously faster than T4
  • Has a watch mode that provides instant regeneration when saving templates in Sitecore
  • Uses the same configuration system as Unicorn does, so it’s familiar and simple to configure

Leprechaun is currently working in production on a couple sites, but does not have complete documentation so it may require a bit more spelunking to use. Currently it supports Synthesis out of the box, but it’s easy to add or change code generation templates.

Ok back to what's happening to these projects. For the last year or so it's been difficult to come up with the time and inclination to give Synthesis and Leprechaun the love they deserve. In order to get them that love, I am ceding maintainership to the excellent Ben Lipson. Ben is a talented developer and Sitecore MVP with a lot of good ideas about where to take these tools. He'll do a great job.

Aside from transferring the repositories to Ben, nothing else is changing.

Will you be disappearing from the community?

nope

No. #venting 4lyfe.

What will you be working on at Sitecore?

I’ll be on Team X, led by the illustrious Alex Shyba. In other words, if I told you I’d have to kill you.

actually it's JSS

/giphy #magic8ball "Will this be awesome?"

yes

Quickly add SSL to Solr

There have been several people recently who I’ve seen having trouble setting up SSL for their Solr in order to use it with Sitecore 9. So, I present the following gist to you. It’s designed to automate the complete setup process of adding SSL to Solr with a self-signed certificate, and trusting that self-signed certificate. For production setups with a real certificate, it should be quite easy to modify.

It’s been tested on standalone as well as Bitnami Solr. The script requires Windows 10 to use the Import-PfxCertificate cmdlet; if you don’t have that you can remove the trust scripting and do it manually.


All about xConnect Security

Sitecore 9 introduces the new xConnect server to the ecosystem. xConnect is an abstracted service layer that Sitecore uses for all its analytics and marketing automation features. If you’re using Sitecore XP (aka xDB), you’ll need an xConnect server if you upgrade to Sitecore 9.

xConnect is noteworthy because it introduces client certificate authentication for the Sitecore XP server to communicate with xConnect. Certificates are a complex subject, and can fail in any number of less than helpful ways. This post aims to help you understand how certificates work in Sitecore 9, and provide you some tools to diagnose what’s wrong when they are not working right.

What is TLS?

In order to understand how xConnect security works, it's important to understand what's going on underneath: Transport Layer Security (TLS). You may also think of this as "SSL" or "HTTPS."

TLS is a protocol for establishing secure encrypted connections between a server and a client. The key aspect of TLS is that the client and server can securely exchange encryption keys in such a way that they cannot be observed by malicious parties that may be watching the exchange.

Asymmetric vs Symmetric Encryption

To understand how TLS works, it’s important to understand the distinction between Asymmetric (also called Public Key) Encryption, and Symmetric Encryption.

If you ever made secret codes as a kid, you've probably used symmetric encryption. This is where the sender and receiver both need to know a key to decrypt the message - for example, a simple shift cipher where D = A, E = B, and so forth. Julius Caesar famously sent secret messages by shifting letters three places forward like this. Symmetric encryption does have one major downfall, however: possession of the secret key lets anyone read any encrypted message, even if they are not the intended recipient.

Asymmetric encryption, on the other hand, uses two different keys: a public key and a private key. The public key can be shared with anyone without compromising anything. A client can use the public key to encrypt a message in such a way that it can only be decrypted with the server's private key. In this way, the server can receive private encrypted messages from clients it shares no prior secrets with.

TLS uses asymmetric encryption to transfer an encryption key for symmetric encryption, which is used for ongoing data transfer over the encrypted connection. This is done because asymmetric encryption is much much slower than symmetric.
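As a rough illustration of that idea - not the actual TLS handshake, which uses more sophisticated key exchange - here's a sketch using Node's built-in crypto module:

const crypto = require('crypto');

// the server owns an asymmetric key pair; the public key can be handed to anyone
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// the client invents a random symmetric key and encrypts it with the server's public key
const symmetricKey = crypto.randomBytes(32);
const encryptedKey = crypto.publicEncrypt(publicKey, symmetricKey);

// only the holder of the private key can recover the symmetric key...
const recoveredKey = crypto.privateDecrypt(privateKey, encryptedKey);
console.log(recoveredKey.equals(symmetricKey)); // true

// ...and from here on, both sides use the fast symmetric key for the bulk of the traffic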

It’s important to understand the difference between public and private keys when you set up Sitecore 9, because they need to be deployed to different servers in your infrastructure. A certificate generally includes both a public and private key, however it can also include only a public key.

xConnect Setup

xConnect uses mutual authentication to secure the connections between it and the Sitecore XP server. This is accomplished using TLS client certificates.

If you’ve worked with SSL certificates before, this is a stronger form of SSL where not only does the client have to trust the server, but the server also has to trust a second certificate issued to the client. In this case, the client is the Sitecore XP server, and the server is the xConnect server. Let’s take a look at how this works:

SSL Server Certificate Negotiation

All SSL connections go through this process, whether xConnect or otherwise. In a standard Sitecore 9 XP installation, the xConnect server will have the server certificate installed. The Sitecore XP server will only have a server certificate if access to Sitecore itself, e.g. for administration, is done via SSL (in which case it will likely be a separate server certificate from xConnect’s).

  1. Client prepares to make a HTTPS request (e.g. you ask for https://xconnect)
  2. Client sends a ClientHello message to the server. This proposes encryption standards, among other things.
  3. The server replies with a ServerHello message back to the client. This includes the server’s public key, and the encryption standards that the server has selected from what the client proposed in the ClientHello.
  4. The client validates the server certificate (e.g. must have correct domain and trusted issuer)
  5. A symmetric encryption key is generated and exchanged using the server’s public key
  6. Now that an encrypted connection is established, a normal HTTP request is sent over the encrypted channel

What can go wrong with server certificate negotiation

The most common issues are domain mismatches and untrusted certificates. Generally you can diagnose issues with server certificates using a web browser - request the site over HTTPS and review the error shown in the browser. Make sure you request the xConnect server URL, not the Sitecore XP URL if you are diagnosing an xConnect connectivity issue.
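If a browser isn't convenient (for example on a server with no UI), a small script can show you what certificate the server is presenting. This is just a diagnostic sketch using Node's tls module; the host name is an example:

const tls = require('tls');

const host = 'xconnect.local'; // your xConnect host name here

// connect without validating, so we can inspect even a broken certificate
const socket = tls.connect(443, host, { servername: host, rejectUnauthorized: false }, () => {
  const cert = socket.getPeerCertificate();
  console.log('Subject: ', cert.subject);   // should match the domain you requested
  console.log('Issuer:  ', cert.issuer);    // must chain to a root the client trusts
  console.log('Expires: ', cert.valid_to);
  console.log('Trusted: ', socket.authorized, socket.authorizationError || '');
  socket.end();
});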

Domain Mismatches

A domain mismatch occurs when a certificate's domain does not match the domain being requested. For example, a certificate issued to sitecore.net will fail this validation if the site you're requesting is https://foo.local. Certificates may also be issued using wildcards (e.g. *.sitecore.net). Note that wildcards apply to one level of subdomains only - so in the previous example foo.sitecore.net would be valid, but bar.foo.sitecore.net would not be (and the bare sitecore.net is only covered if the certificate also lists it explicitly).

Domain matching is done based on the host header the server receives. For example if the xConnect server is https://xconnect but can also be accessed via https://127.0.0.1, the certificate will be invalid if the IP address is used because the certificate was not issued for 127.0.0.1.

If you have a domain mismatch issue, you will need to either get a new certificate (and update the xConnect IIS site(s) to use the new certificate) or change the domain for xConnect to one that is valid for the certificate.

Untrusted Certificates

To understand trust issues, it’s important to understand how certificates are issued. Certificates are issued by other certificates.

In fact, certificates can be issued in chains (Xzibit would definitely approve). Trust issues occur when the certificate that issued the server certificate is not considered to be trusted by the client. On Windows, trust is established by being included in the Trusted Root Certification Authorities in the machine certificates:

Note that to trust a certificate, only the public key for the server certificate must be imported here. If you’re using self-signed certificates that issued themselves - like localhost in the screenshot - you can add the certificate itself to the trusted root certificates by exporting it and reimporting it into the root certificates. If using a commercially issued certificate, that certification authority’s root certificates must be added to the trusted root - in most cases, they are already present.

More esoteric errors

There are some less common issues that can also cause server certificate negotiation errors. Servers are commonly hardened to disable vulnerable ciphers, hash algorithms, or SSL/TLS protocol versions. You might have heard of the Heartbleed or POODLE vulnerabilities, or had to support TLS 1.2 when working with some web APIs such as Salesforce. This hardening is a good idea, but if the server and client cannot mutually agree on a supported cipher, hash, and protocol version, the connection will fail. If the certificate is trusted and has the correct domain, this is the next thing to check.

If you’ve never heard of this before, you can secure your IIS servers using a tool like IISCrypto. Go do it now, this post will wait.

Note that the .NET HTTP client with framework versions prior to 4.6.2 defaults to only supporting TLS up to 1.1. Many modern security scripts will disable all TLS protocol versions except for 1.2, which will cause HTTP requests from clients with earlier versions of the .NET framework installed to fail.

SSL Client Certificate Negotiation

Hopefully now you have a decent idea of how server certificates work. But xConnect also uses client certificates. A client certificate enables mutual authentication. With only a server certificate, the client must decide to trust the server but the server has no way to know if it should trust the client. Enter client certificates.

A client certificate is essentially the opposite of the server certificate. When using a client certificate, the negotiation works similarly to the server certificate, except that when the server sends the ServerHello (#3 above) it requests a client certificate in addition to sending its public key. The client then sends the public key of its client certificate back to the server - and then the server decides whether it should trust the client certificate.
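To make this concrete, here is roughly what presenting a client certificate looks like from code. This is a hedged Node sketch for diagnostics only - the file paths and host are examples, and Sitecore XP does this internally via its xConnect client, not with this code:

const https = require('https');
const fs = require('fs');

const options = {
  hostname: 'xconnect.local', // example host
  port: 443,
  path: '/',
  method: 'GET',
  // the client certificate and its private key; the server only ever receives the public part
  cert: fs.readFileSync('./xconnect-client.crt'),
  key: fs.readFileSync('./xconnect-client.key'),
};

const req = https.request(options, (res) => {
  // a 2xx response means the server accepted the client certificate; 4xx usually means it did not
  console.log('Status:', res.statusCode);
});
req.on('error', (err) => console.error('TLS/HTTP error:', err.message));
req.end();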

If the client certificate is not trusted, it is rejected. The rules for validating a client certificate are up to the server and do not necessarily follow the same validation rules as a server certificate on the client. In the case of xConnect:

  • The domain/subject on the client certificate does not seem to matter to xConnect
  • The trusting of the certificate is done using the thumbprint of the certificate (a hash of the certificate which uniquely identifies it). Note that the thumbprint will change when an expired certificate is renewed, so you will need to reconfigure xConnect after renewing a client certificate so that it trusts the newer thumbprint.
  • The xConnect server must trust the issuer of the client certificate

What can go wrong with client certificate negotiation

There are a lot of things that can go wrong with the client certificate - more so than the server certificate. When troubleshooting, your first step should be the Sitecore XP logs; they generally contain some basic information about a bad client certificate.

If you’re receiving HTTP 4xx responses

Chances are your client certificate validation failed. This could mean:

  • The client certificate is not installed on both the Sitecore XP server and the xConnect server (the xConnect server would only need the public key)
  • The client certificate is not considered trusted on the xConnect server
  • The certificate thumbprint configured in the xConnect server’s App_Config\ConnectionStrings.config is missing or incorrect. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there (a small snippet for normalizing a pasted thumbprint follows this list).
  • The certificate location configured in the xConnect server’s App_Config\ConnectionStrings.config is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE.
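As mentioned above, a thumbprint copied from certificate manager easily picks up spaces, colons, or an invisible leading character. A tiny snippet like this (purely illustrative) produces the format the connection string expects:

// keep only hex characters (drops spaces, colons, and hidden characters) and upper-case the result
const normalizeThumbprint = (raw) => raw.replace(/[^0-9a-fA-F]/g, '').toUpperCase();

console.log(normalizeThumbprint('a1 b2:c3 d4')); // A1B2C3D4 (example value)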

“The certificate was not found”

This indicates one of two things:

  • The thumbprint is incorrect in the Sitecore XP server’s App_Config\ConnectionStrings.config file. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there.
  • The certificate location configured in the Sitecore XP server’s App_Config\ConnectionStrings.config is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE.

System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

As long as the server certificate is valid, this message most likely means that the Sitecore XP server’s IIS app pool user account does not have read access to the client certificate’s private key. This access is needed so that the Sitecore XP server can encrypt communications using its client certificate.

To remedy this issue, open the local machine certificates (“Manage computer certificates” in a start menu search) on the Sitecore XP server. Find the client certificate (normally under Personal\Certificates). Right click it, choose All Tasks, then Manage Private Keys.... You should get a security assignment window like this:

Next, add your IIS app pool user to the ACLs and grant it Read permissions (as above). Remember if you’re using AppPoolIdentity (you should be, unless using a domain account for windows auth to SQL), you must select the account by choosing Local Computer as the search location, and enter IIS APPPOOL\MyAppPoolsName as the account name to find.

Still having issues? Well, you can also use the security audit log to find out which account is failing to get access, then add that account in the key ACLs above:

Local Development Tip

If you work at a Sitecore partner and will have multiple copies of Sitecore 9 running locally, this can cause issues if you issue a dedicated SSL server certificate to each site. This is because a given TCP port (e.g. 443, the default) can only have one SSL certificate bound to it. This precludes having multiple Sitecore 9 instances running together locally unless they share the same SSL certificate.

Wildcard certificates are perfect for this job. As long as you use the same top level suffix for all your dev sites (e.g. *.local.dev), you can use the same wildcard certificate for your server certificate for all dev sites. Note that IIS’ self-signed certificate generator will not create a wildcard certificate for you. You’ll have to use something else, like New-SelfSignedCertificate, to create one.

Important note: You must issue a wildcard for at least two segments of domain for it to be trusted. For example *.dev is bad, but *.local.dev is good.

Note that client certificates should be unique on each site, only the server certificate should be shared.

In the release version of Sitecore 9, you can also disable the requirement to use encryption with xConnect which can bypass a lot of debugging. Do not do this in production or else a herd of elephants will destroy you.

Advanced Debugging with Wireshark

It’s possible to watch the SSL negotiation at a TCP/IP level using a network monitor such as Wireshark. This can provide insights on why your setup is failing when error messages are less than optimal. For example I spent a couple days diagnosing what turned out to be private key security issues. I figured this out by using Wireshark and observing that the client was never sending its client certificate after the server requested it, and figuring out why that was.

To use Wireshark to watch SSL traffic, you’ll have to set it up to decrypt traffic. This guide walks you through setting up decryption on Windows with an exported private key.

If you’re tracing local dev traffic (e.g. from localhost to localhost, including using your machine’s DNS name) Wireshark will not capture that unless you install npcap instead of the default pcap packet capture software. Once npcap is installed, tell Wireshark to bind to the Npcap Loopback Adapter to see local traffic.

Here is a screenshot of the Wireshark capture where I diagnosed the client certificate security issue:

Good luck!

Where to find Sitecore documentation

The land of Sitecore documentation is becoming a bit crowded these days. While at Symposium, I heard some people say they didn’t know how to keep up on new documentation - so here’s what I know. No doubt I missed some resources too, but these are the ones I usually use and follow.

Official Docs

Sitecore Doc Site

This is the main place to find documentation for Sitecore, as well as Sitecore modules. It has a handy RSS feed of updated articles you should subscribe to.

Unfortunately the RSS feed is not entirely complete due to documentation microsites being proxied in under the main doc site (for example Commerce and the v9 Scaling Guide). These statically generated sites generally do not provide their own RSS feeds, and are thus harder to track updates to.

Sitecore Knowledge Base

The Sitecore KB lists known issues, support resolutions, security bulletins, and other support information. Like the main doc site, it has its own RSS feed of updated articles that is absolutely worth subscribing to.

Sitecore Helix Docs

Sitecore’s official architecture guidance has its own website. Unfortunately, no RSS feed of updates.

Sitecore JSS Docs

The JavaScript Services module has its own separate documentation site. Unfortunately, no RSS feed of updates.

Sitecore Dev Site

Where to go to actually download Sitecore releases and official modules such as SXA and PXM. There’s no RSS feed of new releases and updates, unfortunately.

Community-run Docs

Sitecore Blog Feed

A Sitecore-run blog aggregator that serves up a fresh helping of most major Sitecore blogs. Worth subscribing to via its RSS feed.

Sitecore StackExchange

A community-driven Q&A site that’s part of StackExchange. If you have a question about Sitecore, there are many highly active members who are happy to help here.

Sitecore Slack

Slack is a group messaging/discussion tool. The Sitecore Community Slack group has over 2,700 Sitecore developers with very active participation. If you do Sitecore, you should be here.

Unofficial Sitecore Training

Community run unofficial training videos that cover development practices that are commonly used, but not covered in official Sitecore training. More opinionated, influenced heavily by real-world implementation experiences.

Sitecore Community Docs

Unofficial documentation. Not updated that often any longer but still some good information, especially the article on config patching.

Sitecore Powershell Extensions Docs

The SPE documentation is so complete that it’s worth mentioning even though it’s for a single Sitecore module.

Unicorn 4 Released


I’m happy to announce the final release of Unicorn 4.0! Unicorn 4 comes with significant performance and developer experience improvements, along with bug fixes. Unicorn 4 is available from NuGet or GitHub.

Project Dilithium and Performance

Unicorn 4 is faster - a lot faster. Check out these benchmarks:

performance benchmarks

The speed increase is due to optimized caching routines, as well as the Dilithium batch processors. Dilithium is an optional feature that is off by default: because of its newness, it’s still experimental. I’m using it in production though. Give it a try - it can always be turned off without hurting anything.

For more detail into how Unicorn 4 is faster, and what Dilithium does, check out this detailed blog post.

Improved Modular Configuration

Unicorn 4 features a refactored configuration system that is designed to support Sitecore Helix projects with an improved configuration experience. The new config system is completely backwards-compatible, but now enables configuration inheritance, configuration variables, and configuration extension so that modular projects can encode their conventions (e.g. paths to include, physical paths) into one base config and all the module configs can extend it.

This drastically reduces the verbosity of the module configurations, and improves their maintainability by allowing conventions to be DRY. Here’s a very simple example of a base conventions configuration:

<configuration name="Habitat.Feature.Base" abstract="true">
  <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization" />
  <predicate>
    <include name="$(layer).$(module).Templates" database="master" path="/sitecore/templates/$(layer)/$(module)" />
  </predicate>
</configuration>

And here’s a module configuration that extends it:

<configuration
  name="Feature.News"
  extends="Habitat.Feature.Base">
  <!-- automatically stores items at $(sourceFolder)\Feature\News\serialization -->
  <!-- automatically includes /sitecore/templates/Feature/News -->
</configuration>

There’s a lot more that you can do with the configuration enhancements in Unicorn 4 too. For additional details, read this extensive blog post.

Sitecore PowerShell Extensions Support


Just about anything you can do with Unicorn can now be automated using Sitecore PowerShell Extensions in Unicorn 4. You can now run Unicorn SPE cmdlets to…

Console Output Scaling

The Unicorn console has received a serious upgrade in Unicorn 4. If you’ve ever run a sync that changed a large number of items from the Unicorn Control Panel, you may have noticed the browser slow to a crawl and the sync seem to almost stop. The console that underpins Unicorn 3 and earlier started to choke at around 500 lines.

No longer! Unicorn 4’s upgraded console has spit out 100,000 lines without a hitch, and it should scale beyond that.

The automated tool console (PowerShell API) has also received an upgrade. Previously the tool console buffered all the output of a sync before sending it back. This caused problems in certain environments, namely Azure, where TCP connections that don’t send any data for more than four minutes are terminated. This would cause long-running syncs in Azure to die unexpectedly.

In Unicorn 4 the automated tool console emits data in a stream, just like the control panel console. There’s also a heartbeat: if no new console entries are made for 30 seconds, a '.' is sent to keep the connection active.

The streaming tool console also requires updating your Unicorn.psm1 file - not only will you get defense against TCP timeouts, you’ll also be able to see the sync occur in real time using the PSAPI just like you would from the control panel. No more waiting until it’s done to see how things are going :)

Predicates can Exclude by Template ID or Name

Unicorn 4 can now exclude items from a configuration by template ID, thanks to Alan Płócieniak. See also Alan’s original post on the technique.

<include name="Template ID" database="master" path="/sitecore/allowed">
  <exclude templateId="{3B4F2B85-778D-44F3-9B2D-BEFF1F3575E6}" />
</include>

You can also exclude items by a regular expression of their name. This enables scenarios such as wanting to include all templates, but exclude all __Standard values items.

<include name="Name pattern" database="master" path="/sitecore/namepattern">
  <exclude namePattern="^__Standard values$" />
</include>

The complete grammar for predicates is always in the predicate test config.

Breaking Changes

Unicorn 4’s breaking changes do not break any common use-cases of Unicorn, but review them to see if they affect you.

  • The __Originator field is now serialized by default. This enables proper tracking of the origin of items instantiated from branch templates.
  • Multithreaded sync support has been removed due to Sitecore bugs preventing realistic use. This was disabled by default already. Dilithium is faster than multithread sync ever was.
  • The Rainbow UseLegacyAttributeFormatting (formats items in Unicorn < 3.1 format) setting has been removed. The new format is now always used. This has always been off by default.
  • Rainbow FieldComparers are no longer activated using the Sitecore Factory, so they only support parameterless constructors (this would only affect custom comparers; the stock ones have always been parameterless)
  • Due to the console upgrades, a new Unicorn.psm1 is required if you are using Unicorn’s PowerShell API. This file also now ships in the NuGet package, so you can be sure you’re getting the right version for your Unicorn.

Bug Fixes

  • Transparent sync misc fixes (to renaming, saving display names, instantiating branch templates into TpSync areas)
  • Renaming an item and changing fields on it in one operation (only possible with Sitecore API) now no longer loses the additional field data in the serialized item
  • Improved output and logging, clarified messaging, improved developer experience
  • Content editor warnings now handle items in more than one configuration correctly
  • Control panel now displays which paths are invalid when initial serialization is blocked due to invalid include paths
  • The required password length for user serialization is now configurable, should you really really want to use b
  • Using Fast Query while any Transparent Sync configuration is active will now log a warning to the Sitecore logs (fast query cannot find transparent sync items, so items may not be returned as expected). This can be disabled if your logs start to fill with spam and you understand the issue.
  • PowerShell API now defaults to not write secrets to the console (debug is off by default) for secure-by-default-ness
  • Fixed a background exception that could occur when modifying serialized items on disk in rare cases, which could cause the app pool to terminate #222
  • The Rainbow data cache now correctly invalidates if an item is moved or renamed on disk after being added to the cache
  • Choosing many configurations to sync will no longer push the sync button off the page #232
  • The default console logging level for interactive syncing has been changed from Info to Debug, since there is no longer a scaling issue with the console output. This provides better insights into what has been changed on items without needing to see the Sitecore logs

Upgrading

If you’re coming from classical Unicorn 3.1 or later, upgrading is actually really simple: just upgrade your NuGet package. Unicorn 4 changes nothing about storage or formatting (except that the __Originator field is no longer ignored by default), so all existing serialized items are compatible.

If you’re invoking Unicorn via its remote PowerShell API, make sure to upgrade your Unicorn.psm1 to the Unicorn 4 version to ensure correct error handling with the streaming console.

Thanks

Thank you to the community members who contributed code and bug reports to this release.


Simplifying Contact Facets with C# 6

Contact Facets allow you to persist information about visitors into the Sitecore xDB. We’re not going to get into the theory behind them in this post; for that go read Pete Navarra’s great blog post that summarizes current practices and how to add facets.

Today we’re going to discuss how to syntactically improve the declaration of a contact facet class using syntaxes available in C# 6.0 (VS 2015) and C# 7.0 (VS 2017). It’s important to note that the C# version is decoupled from the .NET framework version: the C# 7.0 compiler is perfectly capable of emitting C# 7 syntax to a .NET 4.5-targeted assembly, for instance. So you can use these modern language features as long as you’ve got the right version of MSBuild or Visual Studio :)

Here’s the example Pete uses in his post, which follows other examples out there as well:

using System;
using Sitecore.Analytics.Model.Framework;

namespace SitecoreHacker.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        private const string CUSTOMER_ID = "CustomerId";
        private const string SEGEMENT = "Segment"; // sic :p

        #region Properties
        public string CustomerId
        {
            get { return GetAttribute<string>(CUSTOMER_ID); }
            set { SetAttribute(CUSTOMER_ID, value); }
        }

        public string Segment
        {
            get { return GetAttribute<string>(SEGEMENT); }
            set { SetAttribute(SEGEMENT, value); }
        }
        #endregion

        public MarketingData()
        {
            EnsureAttribute<string>(CUSTOMER_ID);
            EnsureAttribute<string>(SEGEMENT);
        }
    }
}

As you can see, the facet API requires string keys for the facet values - in this case stored as const string - to get and set them. Further, as Pete notes:

I found out the hard way that the constants defined, the value must equal the actual name of the class property for the same attribute.

Well, in C# 6 (VS 2015), there’s a syntax for that. The nameof expression gives you the string name of a variable, type, or member. This essentially hands the management and maintenance of the const value off to the compiler instead of the developer.

So we can clean up this example by using nameof instead of constants - and, as a bonus, get refactoring support and compile-time validation of the names:

using System;
using Sitecore.Analytics.Model.Framework;

namespace Elephant.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        public string CustomerId
        {
            get { return GetAttribute<string>(nameof(CustomerId)); }
            set { SetAttribute(nameof(CustomerId), value); }
        }

        public string Segment
        {
            get { return GetAttribute<string>(nameof(Segment)); }
            set { SetAttribute(nameof(Segment), value); }
        }

        public MarketingData()
        {
            EnsureAttribute<string>(nameof(CustomerId));
            EnsureAttribute<string>(nameof(Segment));
        }
    }
}

Finally if you have C# 7.0 (VS 2017), you can also utilize expression bodied members to further clean up the property syntax:

using System;
using Sitecore.Analytics.Model.Framework;

namespace Rhino.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        public string CustomerId
        {
            // expression bodied members turn the single-expression get
            // into a lambda-style syntax, removing the need for braces
            get => GetAttribute<string>(nameof(CustomerId));
            set => SetAttribute(nameof(CustomerId), value);
        }

        public string Segment
        {
            get => GetAttribute<string>(nameof(Segment));
            set => SetAttribute(nameof(Segment), value);
        }

        public MarketingData()
        {
            EnsureAttribute<string>(nameof(CustomerId));
            EnsureAttribute<string>(nameof(Segment));
        }
    }
}

So there - now go forth and put your data in the xDB :)