`for await...of` loops are a really cool feature of modern JavaScript that you can use when a loop requires asynchronous iteration. What the heck is that? Well, let's look at a regular `for...of` loop:

```js
const array = [1, 2, 3];
for (const item of array) console.log(item);
```
Nothing weird here, we're looping over an array and printing each element. An array is a synchronous data structure, so we can loop over it very simply. But what about asynchronous data, like say the `fetch` API to get data from an HTTP endpoint? A simple implementation of looping over that data is not much more complex:

```js
const res = await fetch("https://whatever.com/api");
for (const item of await res.json()) console.log(item);
```
We're still using `for...of` and being synchronous in our loop, so let's add one more factor: we're using an API that uses pagination of some sort, which is pretty much every API ever made, since it's expensive to load giant datasets. A simple pagination-based iteration might look something like this:

```js
let data = [];
```
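In rough sketch form - assuming a hypothetical `?page=` query parameter and an `{ items, nextPage }` response shape, which are not from the original post - that "fetch everything first" approach looks something like this:

```js
// Sketch only: naive pagination that accumulates every page into one array before iterating.
let data = [];
let page = 1;
let hasMore = true;

while (hasMore) {
  const res = await fetch(`https://whatever.com/api?page=${page}`);
  const body = await res.json();
  data = data.concat(body.items);
  hasMore = Boolean(body.nextPage);
  page += 1;
}

// Only now can we iterate the whole (possibly huge) array
for (const item of data) {
  console.log(item);
}
```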
This is sort of async iteration, but it suffers from a major shortcoming: you have to commingle the iteration code and the data fetching in the same loop, as each page is ephemeral - or else select all the pages and allocate a monster array before you can deal with the data. Neither of those is a great option, so let's try an async generator function instead. An async generator is a function that returns an async iterable that you can loop over. Like other iterable and enumerable types (such as `IEnumerable` in C#), async iterables in JS are essentially an object with a `next` function to get the next thing in the iterable. This has several interesting side effects.
For my own sanity I’m going to drop into TypeScript here to illustrate the types that we’re passing around :)
```ts
// This paginateAsync function implements generic pagination functionality over an arbitrary API,
```
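As a rough sketch - the `fetchPage` callback, its `{ items, nextPage }` result shape, and `fetchMyApi` are illustrative assumptions, not an actual API - the idea is:

```js
// Sketch only: a generic async generator that walks a paged API.
async function* paginateAsync(fetchPage, firstPage = 1) {
  let page = firstPage;
  while (page !== null && page !== undefined) {
    // fetchPage is assumed to return { items: [...], nextPage: number | null }
    const result = await fetchPage(page);
    // Hand back each item from this page before fetching the next one
    yield* result.items;
    page = result.nextPage;
  }
}

// Consuming it: each page is fetched lazily, only as the loop needs it
for await (const item of paginateAsync((page) => fetchMyApi(page))) {
  console.log(item);
}
```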
So now you can call a paginated API and treat the result as if it were a non-paginated loop. Pretty neat, right? Even better, you can use this in all modern browsers. (IE ain’t modern, folks…)
Again for my C# readers: this is pretty similar conceptually to C#'s `IAsyncEnumerable` construct and can be used in similar circumstances.
Anyhow, today I had a monorepo that has more than one site that I wanted to deploy to Netlify. The repo looked something like this:

```
MyRepo
```
Netlify supports this, but it's not super well documented, so accomplishing it takes a little sleuthing.
Monorepo support works by setting the Base directory of each site's configuration to point at the relative path to your site root:

As it says on the tin, this essentially sets the path to `cd` to before starting the build commands. (Find it here: https://app.netlify.com/sites/<your-site-name-here>/settings/deploys)
But there's one cool thing the description forgets to mention: you can also configure Netlify sites using a `netlify.toml` file, making your configuration versioned in Git. This gets really useful to control the whole build stack from one place: configuring the build commands, redirects, setting up lambda functions. Netlify usually expects `netlify.toml` in the root of the repository. However, the Base directory setting also changes where the `netlify.toml` is expected to live. If we do this:

```
MyRepo
```
…then we get the lovely capability to both monorepo our sites, and also version our Netlify configuration. Awesome!
So, let's fix this by using a build container to create a standardized build environment where we can make our server bundle. Basically, when a `Dockerfile` executes, the state of each intermediate step is stored so that it need not be repeated every time the image rebuilds. But more importantly, you can also create intermediate build containers that live only long enough to perform a build step, then get thrown away except for some artifacts you want to persist on to some future part of the container build. In the case of our JSS image, the idea is something like this:
(Diagram: a temporary build container runs `jss build`, and its output artifacts are copied into the final, lightweight runtime container.)
The build container that we build the JSS app in is thrown away after the build occurs, leaving only the lightweight production container with its artifacts. In this case, thanks to switching from `node:lts` to `node:lts-alpine` as the base container, the built container size shrank from 921MB to 93MB.

Note that because the base image is stored as a diff, the image size reduction affects the initial download time of the image on a new host, but once the `node:lts` image is cached it really only changes the amount of static disk space consumed.

Adding a build container step involves adding a few lines to the top of the `Dockerfile` from part 1:
```dockerfile
# Create the build container (note aliasing it as 'build' so we can get artifacts later)
FROM node:lts as build

# Install the JSS CLI
RUN npm install -g @sitecore-jss/sitecore-jss-cli

# Set a working directory in the container, and copy our app's source files there
WORKDIR /build
COPY jss-angular /build

# Install the app's dependencies
RUN npm install

# Run the build
RUN npm run build

# Now, we need to switch contexts into the final container
# lts-alpine is a lightweight Node container, only 90MB
# When we switch contexts, the build container is supplanted
# as the context container
FROM node:lts-alpine

# ...

# When we copy the app's source into the final container
# we need to use --from=[tag] to get the files from our build container
# instead of the local disk
COPY --from=build /build/dist /jss/dist/${SITECORE_APP_NAME}
```
To solve the issue of the Sitecore API URL and API key being baked into the server and browser bundles by webpack during `jss build`, we need to use tokenization. These values really do need to be baked into the files at some point, because the browser that executes them does not understand environment variables on your server or how to replace them - but we should not need to re-run webpack every time a container starts up, either.

We can work around this by baking specific, well-known tokens into the bundle files and then expanding those tokens from environment variable values when the container starts. The approach works something like this:

- At build time, bake well-known token values (such as `%sitecoreApiHost%`) into the bundles instead of real configuration.
- Rename the emitted `*.js` files to `*.base` files. This means the container itself does not contain any JS in its `/dist`. This is necessary so the container can generate the final files each time it starts up. (Since the same image can start many times with different environment variables present, it has to 'rebake' the JS each time.)

Doing this is a bit harder than just doing the build container. First, during the container build in the `Dockerfile`:
```dockerfile
# Before the build container runs the `build` command,
# we need to set specific API key and host values to bake
# into the build artifacts to replace later.
RUN jss setup --layoutServiceHost %layoutServiceHost% --apiKey 309ec3e9-b911-4a0b-aa8d-425045b6dcbd --nonInteractive

RUN npm run build

# After the build container runs the `build` command,
# we need to move all the .js files it emitted to .base files
RUN find dist/ -name '*.js' | xargs -I % mv % %.base
```
With the updated `Dockerfile` in place, the container we build will now have `.base` files ready to specialize into the running configuration when the container starts up. But without any changes to the image itself, it would fail because we can't run an app using `.base` files! So we need to add a little script to the `node-headless-ssr-proxy` to perform this specialization when it starts up inside a container. The specialization process:

- Copies each `.base` file to a `.js` file of the same name (make a runtime copy to use in the browser)
- Replaces the baked-in tokens in the `.js` files with the current runtime environment variables
- Starts the app against the specialized `.js` files and runs normally

I used `bootstrap.sh` for the script name, but any name is fine.
```sh
find dist/ -name '*.base' | while read filename; do export jsname=$(echo $filename | sed -e 's|.base||'); cp $filename $jsname; sed -i -e "s|%layoutServiceHost%|$SITECORE_API_HOST|g" -e "s|309ec3e9-b911-4a0b-aa8d-425045b6dcbd|$SITECORE_API_KEY|g" $jsname; done
```
This script is a rather hard-to-read one-liner, so let's piece it out to understand it:

- `find dist/ -name '*.base' | while read filename` - finds `*.base` files anywhere under `dist`, and reads each found filename into `$filename` in a loop body
- `do export jsname=$(echo $filename | sed -e 's|.base||')` - sets `$jsname` to the name of the found file, with the extension changed from `.base` to `.js`
- `cp $filename $jsname` - copies the `.base` file to the equivalent path, but using the `.js` extension instead
- `sed -i -e "s|%layoutServiceHost%|$SITECORE_API_HOST|g" -e "s|309ec3e9-b911-4a0b-aa8d-425045b6dcbd|$SITECORE_API_KEY|g" $jsname` - uses `sed` to perform a regex replace on the known values we baked into the base file, replacing them with the environment variables (`$SITECORE_API_HOST` and `$SITECORE_API_KEY`) that form the current runtime configuration for those values

Finally we need to get this script to run each time the container starts up. There are several ways we could do this, but I elected to add an npm script to the headless proxy's `package.json`:
```json
"scripts": {
```
…and then changed the entry point in the `Dockerfile` to call this npm script:

```dockerfile
ENTRYPOINT npm run docker
```

The final step is to rebuild the container image so we can start it up, using `docker build`.
The headless proxy Node app has always known how to read environment variables for the Sitecore API host and API key, but those have only applied to the SSR execution, not the browser-side execution. With the modifications we've made, setting those same environment variables will now also apply to the browser. Doing this with Docker is quite trivial when booting the container, for example:

```sh
docker run -p 3000:3000 --env SITECORE_API_KEY=[yourkey] --env SITECORE_API_HOST=http://your.site.core [container-image-name]
```

For more clarity, here's the full contents of the `Dockerfile` with all these changes made:

```dockerfile
FROM node:lts as build
```
In this episode, we have improved the JSS headless container build process by running all of the build inside containers for improved repeatability and tokenized the browser JS bundles so that the same container can be deployed to many environments with different API hosts without needing a rebuild. What’s next? Orchestrating multiple instances with Kubernetes.
The best way to understand containers quickly is, of course, a meme.
Another way to think of a container is as a lightweight virtual machine. Unlike a VM, a container shares much of its system with the host OS or node, which means containers start faster and use far fewer resources than full VMs. Containers also build on one another: we can take the prebuilt `node` container and deploy JSS to it - thus, we offload the maintenance of the base container to the Node maintainers, and we take on the maintenance of only our app.

Containers have become incredibly popular as a way to build and deploy applications because of their consistency and low resource usage. Especially as more applications take on more server-based dependencies (i.e. microservice architectures, or even a traditional app that may need a database, search service, etc.), containers provide a reasonable way to replicate such a complex IT infrastructure on a developer machine in the same way that it runs in production - without each developer needing to have a 1TB RAM, 28-core server to run all those virtual machines.
So with that in mind, what if we wanted to containerize Sitecore JSS’ headless mode host?
Note: we're only containerizing the JSS SSR host in this post; the rest of the Sitecore infrastructure would still need to be deployed traditionally.
If you’re planning to follow along at home with this build, note that you’ll need to install Docker Desktop in order to be able to locally build and run the containers. You may also need to enable virtualization in your UEFI, if it’s off, or potentially for Windows also enable Hyper-V and Containers features at an OS level. Consult the Docker docs for help with that :)
When you create a container, there are three main tasks: choosing a base container to build on, defining a `Dockerfile` that describes how to set up your app on top of it, and building the container image.
Containers are built on top of other containers in an efficient and lightweight way. This means that, for example, your container might start with a Windows Server container, or an Ubuntu container…or it might start from a Node container that was based on a Debian container. You get the idea - containers, like ogres or '90s software architecture, have layers. Each layer is built as a diff from the underlying layer. When you make a container, you're adding a layer.
In our case, JSS headless SSR is a Node-based application, so we will choose the Node container as our base.
Dockerfile

The dockerfile is a file named `Dockerfile` that defines how to create your container. It defines things like the base container to build from (e.g. `FROM node:lts`), the files to copy into the image, and the commands to run while building it.

In our case we want to start from the `node` container:

```dockerfile
FROM node:lts
```
Then we want to tell Docker how to deploy our JSS app on top of the Node container. We do this by telling it which files we want to copy into the container image and where to put them, as well as any commands that need to be run to complete the setup:
```dockerfile
# We want to place our app at /jss on the container filesystem
# (this is a fairly arbitrary choice;
# use something app-specific and don't use '/')
# Subsequent commands and copies are relative to this directory.
WORKDIR /jss

# Specify the _local_ files to copy into the container;
# in this case a copy of the headless SSR proxy: https://github.com/Sitecore/jss/tree/dev/samples/node-headless-ssr-proxy
COPY ./node-headless-ssr-proxy /jss

# Run shell commands _inside the container_ to set up the app;
# in this case, to install npm packages for the headless Node app.
# NOTE: the container is built on the Docker server, not locally!
# Commands you run here run inside the container, and thus
# cannot for example reference local file paths!
RUN npm install

# To run JSS in headless mode, we also need to deploy
# the JSS app's server build artifacts into the container
# for the headless mode proxy to execute. This is another copy.
COPY my-jss-app-name/dist /jss/dist/my-jss-app-name

# When the container starts, we have to make it do something
# aside from start - in this case, start the JSS app.
# The command is run in the context of the WORKDIR we set earlier.
ENTRYPOINT npm run start

# The JSS headless proxy is configured using environment variables,
# which allow us to configure it at runtime. In this case,
# we need to configure the port, app bundle, etc
ENV SITECORE_APP_NAME=my-jss-app-name

# Relative to /jss path to the server bundle built by the JSS app build
# Note: this path should be identical to the path deployed for integrated
# mode, so that path references work correctly.
ENV SITECORE_JSS_SERVER_BUNDLE=./dist/${SITECORE_APP_NAME}/server.bundle.js

# Hostname of the Sitecore instance to retrieve layout data from.
# host.docker.internal == DNS name of the docker host machine,
# i.e. to hit non-container localhost Sitecore dev instance
ENV SITECORE_API_HOST=http://host.docker.internal
ENV SITECORE_API_KEY=GUID-VALUE-HERE

# Enable or disable debug console output (don't use in prod)
ENV SITECORE_ENABLE_DEBUG=false

# Set the _local_ port to run JSS on, within the container
# (this does not expose it publicly)
ENV PORT=3000

# Tell Docker that we expose a port, but this is for documentation;
# the port must be mapped when we start the container to be exposed.
EXPOSE ${PORT}
```
Once we have defined the steps necessary to create the container image, we need to build the container. Building the container:

- Collects the files in the `Dockerfile`'s directory and uploads them to the Docker host (unless listed in a `.dockerignore` file)
- Runs the `Dockerfile` script within the container to configure it

Note: the `Dockerfile` does not execute locally, so make sure you don't make that assumption when using `EXEC` directives; execution also occurs within the container being built, so it occurs in the context of the container (in this case, Debian) and the dependencies that are part of the container.

To build your JSS container, run this within the same folder as your `Dockerfile`:

```sh
docker build -t your-image-name .
```

Once the build is done, you can find your image on Docker using:

```sh
docker images
```

Up to this point we have collected and built the container, but nothing has been run. To create a new instance of your container and start it up, run:

```sh
docker run -p 3000:3000 --name <pick-a-name-for-container-instance> <imagename>
```
The `-p` option maps your localhost port 3000 to the container port 3000 (which we specified the Node host to run on previously using an environment variable).

Once you start the container, visiting `http://localhost:3000` should run the app in the JSS headless host container.
- The `docker ps` command lists running containers. If a container was started without an explicit `--name`, this can help find it.
- The `docker exec` command lets you run commands, including starting a shell - for example, `docker exec -it <container-name> bash`. The `-it` says you want an interactive TTY (in other words an ongoing shell, not a one-off command execution and exit).

In this post, we've created and run a Docker container of the JSS headless mode. This works great for a single container, but for production scenarios we would likely need to orchestrate multiple instances of the container to handle heavy load and provide redundancy. Next time, we will improve our container build script using a build container, then finally the series will end with orchestrating the container using Kubernetes.
The next version of .NET Core will be released in September 2019. It will feature a raft of improvements, notably WPF/desktop app support (Windows only), .NET Standard 2.1 (not going to be supported by .NET 4.x ever), and C# 8 (.NET Standard 2.1 required).
Coinciding with the release of .NET Core 3 will be dotnetconf from September 23-25, a virtual conference highlighting .NET Core 3.
.NET Core 3.1, the long term support version, is slated to ship in November.
After .NET Core 3 ships, .NET Core is dead. Instead .NET 5 will ship, and it will unify the abstraction of .NET Standard into a universal BCL that can run on any .NET 5 compatible runtime (i.e. Xamarin, Mono, Windows .NET). It will also gain Java and Swift interop capabilities (from Mono/Xamarin) on all platforms. The idea is that .NET 5 will be a singular platform that runs anywhere from mobile devices, to IoT/Raspberry Pi, to desktop apps, to cloud server(less).
Web Forms and WCF will never be ported to .NET Core/.NET 5. Specifically for Web Forms, Blazor will be the recommended migration path.
Following .NET 5, the .NET platform will have yearly releases (.NET 6, 7, 8, …). Alternating years will be LTS versions, in other words 2020’s .NET 5 will be supplanted by the LTS .NET 6 in 2021.
Note: C# 8 requires compiler changes that need .NET Standard 2.1+. In other words, C# 8 can only be used with .NET Core 3 and later as a consumer!
The main focus of C# 8 is “robustness.” There are a number of new features that support this goal:
Ever since async/await was shipped in C#, it's been problematic to use it with enumerables, because you must await either `Task<IEnumerable<T>>` (thus awaiting the WHOLE enumerable, which loses its lazy enumeration advantages) or `IEnumerable<Task<T>>`, which potentially requires awaiting in a loop, which is also suboptimal. It also prevents the use of `yield return` in async methods, which makes them significantly less pretty.

In C# 8, this is fixed by introducing `IAsyncEnumerable<T>`, an asynchronously enumerable type. This type is enumerated using `await foreach`, i.e. `await foreach(var t in asyncEnumerable) { /* where t is not a task */ }`. The implementation of `IAsyncEnumerable` is simply allowed to `yield return` values, giving the enumerator control over its own internal asynchrony needs, batching, etc.

The `NullReferenceException` is everyone's favorite C# bugbear, and solutions good and bad abound for asserting that method arguments are not null to avoid throwing them (my favorite is `var x = arg ?? throw new ArgumentNullException(nameof(arg));`). In C# 8, you can explicitly declare reference types as nullable, explicitly stating that a method can return - or accept - a null value. Doing this allows the compiler to remove the need for all those assertions, as it can warn you at compile time if you're not checking a nullable type for null before using it. This is an opt-in feature, either with `#nullable enable` in a file, or it can be turned on per-project.
```csharp
Item? GetItem() {
```

A common need is to parse a string or array and split it up into pieces by index; for example "this string's last two characters" or "the first 5 elements in this array." This sort of code is quite vulnerable to naughty data input causing exceptions; for example `"a".Substring(5)` will throw because it isn't 5 characters long.

C# 8 range expressions allow you to concisely and safely (they won't throw if the array is shorter than the slice) express this sort of problem. They work using `^` to anchor the range to "length - x" or "start + x", a spread, and an optional endpoint. A few examples:

```csharp
var str = "hello world";
```
The `switch` statement receives an upgrade in C# 8 with the ability to assign it directly to a variable, eliminating the need for clumsy `break` statements in every case. It's also possible to use pattern matching with this format (not pictured).

```csharp
var result = "hello" switch {
```
Interfaces can have default implementations for members. This is not intended to kill IoC containers as much as be a tool for API creators to ship additions to public interfaces without breaking existing consumers of that interface. The additional members need only be optionally implemented by downstream consumers, with the defaults used if not overridden.
The `using` statement gets an overhaul to avoid needing a block scope. A using statement is used to prevent forgetting to dispose `IDisposable` resources, but before C# 8 it required a block scope of its own, which - especially in nested usings - made things hard to read. In C# 8, you can define a variable with the `using` keyword and no block scope, and it is implicitly disposed at the end of the current block scope. For example:

```csharp
public void Foo() {
```
In TypeScript 3.4 - currently RC - you can enable incremental builds (via tsconfig, or `--incremental` to the CLI), which allows TS to cache the output of the last build/watch run and essentially 'rehydrate' it during the next build to avoid rebuilding unchanged modules. The upshot of this on the VS Code codebase is that warm build times went from 47 seconds to 11 seconds.
On larger TypeScript codebases, using project references can allow TypeScript to partition compilation units, enabling it to only rebuild changed units in the dependency tree (about like projects and solutions in Visual Studio). This can be used to avoid needing to recompile an entire TypeScript project every time even without incremental builds.
New modernized TypeScript documentation, with content oriented around current TypeScript practices and improved clarity, is in process. Current target is late 2019 to release the new docs.
Using Live Share developers can collaborate effectively while remote, with either Visual Studio, VS Code, or both. It’s a bit like a code-specific combination of screen sharing and collaborative editing. This includes things like:
Code can now connect to a remote system (via SSH or directly to a container) and edit the remote instance as if it were local files. This includes things like installing Code plugins on the remote environment - it’s basically connecting to a “headless” VS Code service. For example, you could write Ruby code on a Docker container in AKS from a windows machine running Code…without needing to set up a Ruby dev environment locally or install any Ruby plugins into Code. Or, do .NET Core dev on a remote VM without needing to install the .NET Core SDK locally.
Remote editing really shines when combined with Azure Dev Spaces (read on…).
- You can jump to a symbol by category with `@:` in the command palette (i.e. `@:myfunc`)
- AI-assisted IntelliSense suggestions are context-aware: `if(arrayVar.` might suggest `Length`, but `stringVar.` might suggest `Split`. The suggestions model was trained on 2000 of the most popular open source codebases on GitHub, so they're based on actual community practices.

Code tips session
Visual Studio tips session
Visual Studio debugger/diags tips
It’s no secret that microservice architectures can be pretty difficult to develop locally. Especially if they tend towards the distributed monolith antipattern ;) Well, Azure decided to do something about that. Probably the coolest demo of the whole event.
Dev Spaces is a prebuilt microservice-oriented workflow for developers based on Azure Kubernetes Service (AKS). The basic concept is that a dev team would share an AKS cluster across their whole team - because developers would probably be working on a few microservices, not the whole galaxy of the system, they can then basically “branch” specific microservices out for personal development, while referencing the rest of the system built from the latest CI build. In other words, there’s no need to mock or setup local microservices that you don’t care about, because yours runs in AKS and refers to the master build.
Even more bonkers, you can use remote development to debug and auto-deploy files to your personal microservices running in AKS. Pull requests can be made to similarly build in their own namespace, giving faster and more efficient use of build time. Seems like a pretty darn nice experience, with most of the orchestration issues no longer your problem.
Watch the session
Documentation on Dev Spaces
You can define your Azure DevOps build and release pipelines using YAML files that can be committed to the repository. This allows the build system to be versioned and stable across branches and enables proper testing of changes to the build via PRs. In preview now, release pipelines (in addition to builds) can be defined in YAML. There is also a visual editor that allows generation of YAML for common tasks using a GUI.
Coming soon, pipelines will be able to automatically generate a Kubernetes manifest and Helm chart for any project with a Dockerfile. This will also generate appropriate build YAML to allow building docker images, deploying them to Azure Container Registry, and spinning up the Kubernetes cluster from those images on Azure Kubernetes Service. Looks really easy to use, and a definite lowering of the barrier to entry to Kubernetes deployment. The pipeline doesn’t only support Azure k8s either; it can deploy to other container registries or k8s clusters on premise or in other clouds.
Azure search is gaining the ability to apply cognitive services to data being indexed; for example it can index the contents of images as detected by cognitive services, etc. Also the 1000-field limit that Sitecore users love is being investigated and may be raised or eliminated in a near term time frame.
A library to build and run machine learning models from within .NET. Can consume several trained model formats, including TensorFlow, which lets you integrate ML models built by a data scientist using Python et al into .NET runtimes. Microsoft is also working on the ONNX model format for interoperable models. While it’s capable of training its own models too, it is interesting to see the model promoted where data scientists do their modeling using mainstream ML tools (i.e. Python), and deploy only the trained model to the .NET application. Promising in terms of integrating .NET with mainstream data scientists.
The AutoML toolkit was also announced. AutoML is a nice looking tool for non-data-scientists to take a dataset and automatically discover a good ML algorithm and hyperparameter set to produce an accurate model. Definitely aimed at the backend developer looking to add a splash of ML to their toolset, as opposed to data scientists, but this seems like it could significantly lower the barrier to ML entry for .NET developers.
More on ML.NET and AutoML here
Go forth and code, me hearties.
Maybe you don't want that to happen, because you like the fluidity of single-page apps or want to reduce bandwidth. Excellent! You've come to the right place.
The following examples use React, but the same architectural principles will translate well to Vue or Angular apps and the JSS field data schema is identical.
There are two places where we can receive links back from Sitecore:
Sitecore supports content fields that are explicitly hyperlinks (usually General Link fields, also referred to as `CommonFieldTypes.GeneralLink` in JSS disconnected data). When returned, these fields contain link data (an `href`, optionally body text, CSS class, target, etc). In JSS apps, these are rendered using the `Link` component like so:
```jsx
import { Link } from '@sitecore-jss/sitecore-jss-react';
```
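Typical usage of the JSS `Link` component looks roughly like this (the `myGeneralLink` field name is just an assumption for illustration):

```jsx
import React from 'react';
import { Link } from '@sitecore-jss/sitecore-jss-react';

// Sketch only: 'myGeneralLink' is a hypothetical General Link field on this rendering.
const MyComponent = ({ fields }) => <Link field={fields.myGeneralLink} />;

export default MyComponent;
```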
This gives us normal anchor tag output in the DOM:
```html
<a href="/path">Link Text</a>
```
But in `react-router`, a link needs to be rendered using `react-router-dom`'s `Link` component instead, for example:
```jsx
import { Link } from 'react-router-dom';
```
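For comparison, a plain react-router link (sketched here with an assumed route) renders like this:

```jsx
import React from 'react';
import { Link } from 'react-router-dom';

// Sketch only: react-router navigates client-side instead of doing a full page load.
const Nav = () => <Link to="/about">About us</Link>;

export default Nav;
```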
To make JSS general links render using `react-router` links for internal links, we can create a component that conditionally chooses the link component, like this:
```jsx
import React from 'react';
```
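A minimal sketch of such a component - the name `RoutableLink` and the internal-link test are illustrative choices, not prescribed by JSS - could look like this:

```jsx
import React from 'react';
import { Link } from '@sitecore-jss/sitecore-jss-react';
import { Link as RouterLink } from 'react-router-dom';

// Sketch only: use a react-router link for internal hrefs, and the JSS Link otherwise.
const RoutableLink = ({ field, ...props }) => {
  const href = field && field.value && field.value.href;
  const isInternal = href && href.startsWith('/');

  if (isInternal) {
    return (
      <RouterLink to={href} {...props}>
        {field.value.text || href}
      </RouterLink>
    );
  }

  // External (or empty) links fall back to the standard JSS Link component
  return <Link field={field} {...props} />;
};

export default RoutableLink;
```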
With this component, now your internal link values will be turned into router links and result in only a new fetch of route data instead of a page refresh!
Rich Text fields are a more interesting proposition because they contain free text that is placed into the DOM, and we cannot inject `RouterLink` components directly into the HTML blob. Instead, we can use React's DOM access to attach an event handler to the rich text markup after it's rendered by React that will trigger route navigation.

Similar to the general link field handling, we can wrap the JSS default `RichText` component with our own component that selects whether to bind the route handling events, based on whether we're editing the page or not:
```jsx
import React from 'react';
```
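One possible sketch of that wrapper - this version assumes React hooks and react-router v5's `useHistory`, and omits the "are we editing the page" check described above - might be:

```jsx
import React, { useEffect, useRef } from 'react';
import { RichText } from '@sitecore-jss/sitecore-jss-react';
import { useHistory } from 'react-router-dom';

// Sketch only: after the rich text renders, intercept clicks on internal <a> tags
// and push them through the router instead of letting the browser navigate.
const RoutableRichText = (props) => {
  const wrapper = useRef(null);
  const history = useHistory();

  useEffect(() => {
    if (!wrapper.current) return undefined;

    const anchors = wrapper.current.querySelectorAll('a[href^="/"]');
    const onClick = (event) => {
      event.preventDefault();
      history.push(event.currentTarget.getAttribute('href'));
    };

    anchors.forEach((a) => a.addEventListener('click', onClick));
    return () => anchors.forEach((a) => a.removeEventListener('click', onClick));
  }, [props.field, history]);

  return (
    <div ref={wrapper}>
      <RichText {...props} />
    </div>
  );
};

export default RoutableRichText;
```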
Now internal links entered in rich text fields will also be treated as route links.
These examples use simple internal link detection that consists of "starts with `/`." There are some edge cases that can defeat simple link detection, such as protocol-relative URLs (i.e. `//google.com`) that are HTTP or HTTPS depending on the current page. These are an antipattern; encrypt all your resources.

For use cases such as this, more advanced detection of internal links may be required that is situational for your implementation.
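For example, a slightly more defensive check than "starts with `/`" - just a sketch, not a complete solution - could look like:

```js
// Sketch only: basic internal-link detection that also handles a couple of edge cases.
function isInternalLink(href) {
  if (!href) return false;
  // Protocol-relative URLs (//example.com) start with '/' but are actually external
  if (href.startsWith('//')) return false;
  // Anything with an explicit scheme (http:, https:, mailto:, tel:) is external
  if (/^[a-z][a-z0-9+.-]*:/i.test(href)) return false;
  return href.startsWith('/');
}
```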
Imagine a large Sitecore JSS application, with a large number of JavaScript components. With the default JSS applications the entire app JS must be deployed to the user when any page in the application loads. This is simple to reason about and performs well with smaller sites, but on a large site it is detrimental to performance if the home page must load 40 components that are not used on that route in order to render.
Code Splitting is a term for breaking up your app’s JS into several chunks, usually via webpack. There are many ways that code splitting can be set up, but we’ll focus on two popular automatic techniques: route-level code splitting, and component-level code splitting.
Route-level code splitting creates a JS bundle for each route in an application. Because of this, it relies on the app using static routing - in other words knowing all routes in advance, and having static components on those routes. This is probably the most widespread code splitting technique, but it is fundamentally incompatible with JSS because the app’s structure and layout is defined by Sitecore. We do not know all of the routes that an app has at build time, nor do we know which components are on those routes because that is also defined by Sitecore.
Component-level code splitting creates a JS bundle for each component in an application. This results in quite granular bundles, but overall excellent compatibility with JSS because it works great with dynamic routing - we only need to load the JS for the components that an author has added to a given route, and they’re individually cacheable by the browser providing great caching across routes too.
The react-loadable library provides excellent component-level code splitting capabilities to React apps. Let’s add it to the JSS React app and split up our components!
react-loadable
We need some extra npm packages to make this work.
```sh
// yarn
```
Make the componentFactory use code splitting

In order to use code splitting, we have to tell create-react-app (which uses webpack) how to split our output JS. This is pretty easy using dynamic `import`, which works like a normal `import` or `require` but loads the module lazily at runtime. react-loadable provides a simple syntax to wrap any React component in a lazy-loading shell.

In JSS applications, the Component Factory is a mapping of the names of components to the implementations of those components - for example, to allow the JSS app to resolve the component named `'ContentBlock'`, provided by the Sitecore Layout Service, to a React component defined in `ContentBlock.js`. The Component Factory is a perfect place to put component-level code splitting.

In a JSS React app, the Component Factory is generated code by default - inferring the components to register based on filesystem conventions. The `/scripts/generate-component-factory.js` file defines how the code is generated. The generated code - created when a build starts - is emitted to `/src/temp/componentFactory.js`. Before we alter the code generator to generate split components, let's compare registering a component in each way:
```js
// static import
```

```js
import React from 'react';
```
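The contrast being drawn is roughly this (a sketch with illustrative file paths, not the generated factory itself):

```jsx
import React from 'react';
import Loadable from 'react-loadable';

// Static registration: ContentBlock is bundled into the main JS payload.
import ContentBlock from '../components/ContentBlock';

// Split registration: react-loadable wraps a dynamic import(), so webpack emits a
// separate chunk that is only downloaded when the component actually renders.
const LoadableContentBlock = Loadable({
  loader: () => import('../components/ContentBlock'),
  loading: () => <div>Loading...</div>,
});

const components = new Map();
components.set('ContentBlock', LoadableContentBlock);
```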
In order to have our component factory use splitting, let's update the code generator to emit `react-loadable` component definitions.

Modify `/scripts/generate-component-factory.js`:

```js
// add this function
```

You can find a completed gist of these changes here. Search in it for `[CS]` to see each change in context. Don't copy the whole file, in case of future changes to the rest of the loader.

Start your app up with `jss start`. At this point code splitting should be working: you should see a JS file get loaded for each component on a route, and a short flash of `Loading...` when the route initially loads.
But it still has some issues that could make it more usable. If the app is server-side rendered in headless or integrated modes none of the content will be present because the dynamic imports are asynchronous and have not resolved before the SSR completes. We’d also love to avoid that flash of loading text if the page was server-side rendered, too. Well guess what, we can do all of that!
Server-side rendering with code splitting is a bit more complex. There are several pieces the app needs to support: the server must wait for the lazily loaded components to resolve before rendering, and it must emit `<script>` tags to preload the used components' JS files on the client side into the SSR HTML.

The build of the server-side JS bundle is separate from the client bundle. We need to teach the server-side build how to compile the dynamic import expressions. Open `/server/server.webpack.config.js`.

```js
// add these after other imports
```
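As a general illustration of the kind of change involved (an assumption about the approach - the gist linked below has the real edits), a server-side webpack config often compiles dynamic `import()` while collapsing the output back into a single SSR bundle:

```js
// Sketch only: keep the SSR bundle as one file even though the app uses dynamic import().
const webpack = require('webpack');

module.exports = {
  // ...the rest of the existing server webpack config...
  target: 'node',
  plugins: [
    new webpack.optimize.LimitChunkCountPlugin({ maxChunks: 1 }),
  ],
};
```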
You can find a completed gist of these changes here. Search in it for `[CS]` to see each change in context. Don't copy the whole file, in case of future changes to the rest of the webpack config.

The `/server/server.js` is the entry point to the JSS React app when it's rendered on the server side. We need to teach this entry point how to successfully execute SSR with lazy-loaded components, and to emit preload script tags for used components.

```js
// add to the top
```
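The general react-loadable SSR pattern - sketched here with assumed file paths and function names; the gist below has the actual changes - looks like this:

```jsx
import React from 'react';
import { renderToString } from 'react-dom/server';
import Loadable from 'react-loadable';
import { getBundles } from 'react-loadable/webpack';
// react-loadable.json is emitted by the ReactLoadablePlugin during the client build (assumed path)
import stats from '../build/react-loadable.json';

export function renderWithPreloadTags(AppRoot) {
  const modules = [];

  // Capture records every loadable component that actually rendered during SSR
  const html = renderToString(
    <Loadable.Capture report={(moduleName) => modules.push(moduleName)}>
      <AppRoot />
    </Loadable.Capture>
  );

  // Turn the captured modules into <script> preload tags to embed in the SSR HTML
  const bundles = getBundles(stats, modules);
  const scripts = bundles.map((b) => `<script src="/${b.file}"></script>`).join('\n');

  return { html, scripts };
}

// Before serving any requests, let all loadable components resolve on the server:
// Loadable.preloadAll().then(() => startListening());
```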
You can find a completed gist of these changes here with better explanatory comments. Search in it for `[CS]` to see each change in context. Don't copy the whole file, in case of future changes to the rest of the entry point.

The `/src/index.js` is the entry point to the JSS React app when it's rendered on the browser side. We need to teach this entry point to wait until any preloaded components that SSR may have emitted to the page are done loading before we render the JSS app the first time, to avoid a flash of loading text.

```js
// add to the top
```
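The browser-side counterpart (again a sketch of the standard react-loadable pattern, with assumed component and element names) waits for the preloaded chunks before hydrating:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import Loadable from 'react-loadable';
import AppRoot from './AppRoot';

// Wait for any chunks the SSR <script> tags started loading, then hydrate so the
// markup matches immediately and there is no flash of "Loading...".
Loadable.preloadReady().then(() => {
  ReactDOM.hydrate(<AppRoot />, document.getElementById('root'));
});
```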
You can find a completed gist of these changes here. Search in it for `[CS]` to see each change in context. Don't copy the whole file, in case of future changes to the rest of the entry point.
With the code changes to enable splitting complete, deploy your app to Sitecore and try it in integrated mode. You should see the SSR HTML include a script tag for every component used on the route, and the rendering will wait until the components have preloaded before showing the application. This preloading means the browser does not have to wait for React to boot up before beginning load of the components, resulting in a much faster page load time.
The ideal component loading technique for each app will be different depending on the number and size of each component. Using the standard JSS styleguide sample app, enabling component code-splitting like this resulted in transferring almost 40k less data when loading the home page (which has a single component) vs the styleguide page (which has many components). This difference increases with the total number of components in a JSS app - but for most apps, code splitting is a smart idea if the app has many components that are used on only a few pages.
In a word, previewing. Imagine during early development and prototyping of a JSS implementation. There’s a team of designers, UX architects, and frontend developers who are designing the app and its interactions. In most cases, Sitecore developers may not be involved yet - or if they are involved, there is no Sitecore instance set up.
This is one of the major advantages of JSS - using disconnected mode, a team like this can develop non-throwaway frontend for the final JSS app. But stakeholders will want to review the in-progress JSS app somewhere other than `http://localhost:3001`, so how do we put a JSS site somewhere shared without having a Sitecore backend?
Wondering about real-world usage?
The JSS docs use this technique.
Running a disconnected JSS app is a lot like headless mode: a reverse proxy is set up that proxies incoming requests to Layout Service, then transforms the result of the LS call into HTML using JS server-side rendering and returns it. In the case of disconnected deployment instead of the proxy sending requests to the Sitecore hosted Layout Service, the requests are proxied to the disconnected layout service.
To deploy a disconnected app you’ll need a Node-compatible host. This is easiest with something like Heroku or another PaaS Node host, but it can also be done on any machine that can run Node. For our example, we’ll use Heroku.
Any of the JSS sample templates will work for this technique. Create yourself a JSS app with the CLI in 5 minutes if you need one to try.
1. Create a `scjssconfig.json` in the root. This will make the build use the local layout service.
2. Run `jss build`. This will build the artifacts that the app needs to run.
3. Run `yarn add @sitecore-jss/sitecore-jss-proxy express` (substitute `npm i --save` if you use npm instead of yarn).
4. Add the following script as `/scripts/disconnected-ssr.js` (or similar path). Note: this code is set up for React, and will require minor tweaks for Angular or Vue samples (`build` -> `dist`):

```js
const express = require('express');
```

5. Run `node ./scripts/disconnected-ssr.js`. Then in a browser, open `http://localhost:8080` to see it in action!

Heroku is a very easy to use PaaS Node host, but you can also deploy to Azure App Service or any other service that can host Node. To get started, sign up for a Heroku account and install and configure the Heroku CLI.
Add a `postinstall` script to the `scripts` section in the `package.json`:

```json
"postinstall": "npm run build"
```

Create a `Procfile`:

```
web: node ./scripts/disconnected-ssr.js $PORT
```
Commit the app to a Git repository and create the Heroku app:

```sh
git init
```

```sh
heroku create <your-heroku-app-name>
```

By default, Heroku will not install `devDependencies` (which we need to start the app in disconnected mode). Run the following command:

```sh
heroku config:set NPM_CONFIG_PRODUCTION=false YARN_PRODUCTION=false
```

Then push the app to Heroku to deploy it:

```sh
git push -u heroku master
```
Once the deploy completes, your disconnected JSS app will be running at https://<yourappname>.herokuapp.com!
In case it's not obvious, do not use this setup in production. The JSS disconnected server is not designed to handle heavy production load.
XSLT 3.0 allows for JSON transformations, so you can use the full power of modern JSS while retaining the XSLT developer experience that Site Core developers know and love.
Our XSLT 3.0 engine allows for client-side rendering by transforming hard-to-read JSON into plain, sensible XML using XSLT 3.0 standards-compliant JSON-to-XML transformations. Instead of ugly JSON, your JSS renderings can use simple, easy-to-read XML like this:
```xml
<j:map xmlns:j="http://www.w3.org/2013/XSL/json">
```
It’s just as simple to make a JSS XSLT to transform your rendering output. Check out this super simple “hello world” sample:
JSON transformations allow XSLT 3.0 to be a transformative force on the modern web. Expect to see recruiters demand 10 years of XSLT 3.0 experience for Site-core candidates within the next year - this is a technology you will not want to miss out on learning.
Modern JavaScript is way too difficult, so we’ve implemented a feature that lets you define dynamic XSLT templates using ultra-modern VBScript:
With a quick piece of simple VBScript like that, you’ll be making awesome JSS pages like this one in no time!
Glad you asked. Download it right here!
The Sitecore JSS team is always looking for opportunities to improve JSS and make it compatible with the most modern technologies. Experimentation is already under way to add ColdFusion scripting support for XSLT 3 JSS renderings, and enable PHP server-side rendering for your SiteCore solutions.
Just another way that we help you succeed in your Site core implementations.
This is the part where you might be expecting me to announce some crazy script I wrote, but not this time because someone else already did the work. So let's address the elephant in the room.
Back in the day I wrote some scripts to install Solr using Bitnami. It worked, but I’d always wanted to find the time to make it simpler and less dependent on Bitnami and their notoriously hard to find older versions. Well Jeremy Davis did exactly what I wanted to do and scripted the whole Solr install, locally trusted SSL certificate, and installation as a service. You can also just skip straight to the gist of the PowerShell you need to run.
Seriously, it’s awesome and you should use it especially for local dev setups.
A few things I noted when I used it:
- Add the `$SolrHost` value to your hosts file before you run the script so that it can resolve with the SSL certificate correctly (it will be bound to that name; don't use `localhost`).

SIF is a pretty amazing tool, but it has two shortcomings: one, that it's great for automated infrastructure but not so great for a quick local setup; and two, that it doesn't yet have an uninstall feature. Well, Rob Ahnemann wrote a handy GUI for SIF called SIFless that fixes both of those issues, making quick setups with mostly default settings easy and generating hackable SIF PowerShell scripts that let you do whatever advanced things you want after using the GUI to get started. And it generates uninstall scripts too that get rid of the Windows services, Solr cores, and other artifacts that are left when you want to tear down that test site.

A few things to be aware of with SIFless:

- The Solr URL it asks for must include the full path to Solr, not just the host and port (i.e. not `https://mysolr:8983`, but `https://mysolr:8983/solr`)
- The Solr physical path it asks for is the root of the Solr install (i.e. `C:\solr\solr-6.6.2`)

Using these two tools I went from having no Solr and no Sitecore installations to having a fully operational battle station Sitecore 9 instance with xConnect in about 45 minutes. And that includes debugging my own silly mistakes. I bet you can do it faster. Get thee to a PowerShell console!
Now normally this wouldn’t merit a whole blog post, and we’d just let the recruiters find out about it on LinkedIn. But I’m sure many folks’ next question would be around all the libraries that I maintain and what will happen to them. So let’s address the elephant in the room:
These will continue exactly as they are today as independent, community driven projects. I will still be the maintainer. The license will remain MIT.
This also includes the dependency libraries that these projects use (e.g. Configy, WebConsole, MicroCHAP).
Ok hold up: let’s first define what Leprechaun is because I haven’t publicly spoken about it yet. It’s a stable command-line code generator that works from Rainbow serialized items. Kinda like the T4 templates that a lot of people use except that it’s better because:
Leprechaun is currently working in production on a couple sites, but does not have complete documentation so it may require a bit more spelunking to use. Currently it supports Synthesis out of the box, but it’s easy to add or change code generation templates.
Ok back to what’s happening to these projects. For the last year or so it’s been difficult to come up with the time and inclination to give Synthesis and Leprechaun the love they deserve. In order to get them that love, I am ceding maintainership to the excellent Ben Lipson. Ben is talented developer and Sitecore MVP with a lot of good ideas about where to take these tools. He’ll do a great job.
Aside from transferring the repositories to Ben, nothing else is changing.
No. #venting 4lyfe.
I’ll be on Team X, led by the illustrious Alex Shyba. In other words, if I told you I’d have to kill you.
/giphy #magic8ball "Will this be awesome?"
It's been tested on standalone as well as Bitnami Solr. The script requires Windows 10 to use the `Import-PfxCertificate` cmdlet; if you don't have that you can remove the trust scripting and do it manually.
xConnect is noteworthy because it introduces client certificate authentication for the Sitecore XP server to communicate with xConnect. Certificates are a complex subject, and can fail in any number of less than helpful ways. This post aims to help you understand how certificates work in Sitecore 9, and provide you some tools to diagnose what’s wrong when they are not working right.
In order to understand how xConnect works, it’s important to understand what’s going on: Transport Layer Security (TLS). You may also think of this as “SSL” or “HTTPS.”
TLS is a protocol for establishing secure encrypted connections between a server and a client. The key aspect of TLS is that the client and server can securely exchange encryption keys in such a way that they cannot be observed by malicious parties that may be watching the exchange.
To understand how TLS works, it’s important to understand the distinction between Asymmetric (also called Public Key) Encryption, and Symmetric Encryption.
If you ever made secret codes as a kid, you've probably used symmetric encryption. This is where the sender and receiver both need to know a key to decrypt the message, for example a simple shift cipher where `D` = `A`, `E` = `B`, and so forth. Julius Caesar famously sent secret messages by shifting letters three places forward like this. Symmetric encryption does have one major downfall, however: possession of the secret key lets you read any encrypted message, even if you're not the intended recipient.
Asymmetric encryption on the other hand uses two different keys: a public key and a private key. The public key can be shared with anyone without compromising anything. However a client can use the public key to encrypt a message in such a way that it can only be decrypted with the server’s private key. In this way, you can receive private encrypted messages from clients you don’t share any secrets with - but they can still send the server private messages.
TLS uses asymmetric encryption to transfer an encryption key for symmetric encryption, which is used for ongoing data transfer over the encrypted connection. This is done because asymmetric encryption is much much slower than symmetric.
It’s important to understand the difference between public and private keys when you set up Sitecore 9, because they need to be deployed to different servers in your infrastructure. A certificate generally includes both a public and private key, however it can also include only a public key.
xConnect uses mutual authentication to secure the connections between it and the Sitecore XP server. This is accomplished using TLS client certificates.
If you’ve worked with SSL certificates before, this is a stronger form of SSL where not only does the client have to trust the server, but the server also has to trust a second certificate issued to the client. In this case, the client is the Sitecore XP server, and the server is the xConnect server. Let’s take a look at how this works:
All SSL connections go through this process, whether xConnect or otherwise. In a standard Sitecore 9 XP installation, the xConnect server will have the server certificate installed. The Sitecore XP server will only have a server certificate if access to Sitecore itself, e.g. for administration, is done via SSL (in which case it will likely be a separate server certificate from xConnect’s).
For the connection to succeed, the server certificate must match the URL the client requests (e.g. `https://xconnect`) and must be trusted by the client.

The most common issues are domain mismatches and untrusted certificates. Generally you can diagnose issues with server certificates using a web browser - request the site over HTTPS and review the error shown in the browser. Make sure you request the xConnect server URL, not the Sitecore XP URL, if you are diagnosing an xConnect connectivity issue.

A domain mismatch occurs when a certificate's domain does not match the domain being requested. For example, a certificate issued to `sitecore.net` will fail this validation if the site you're requesting is `https://foo.local`. Certificates may also be issued using wildcards (e.g. `*.sitecore.net`). Note that wildcards apply to one level of subdomains only - so in the previous example `sitecore.net` or `foo.sitecore.net` would be valid, but `bar.foo.sitecore.net` would not be.

Domain matching is done based on the host header the server receives. For example, if the xConnect server is `https://xconnect` but can also be accessed via `https://127.0.0.1`, the certificate will be invalid if the IP address is used, because the certificate was not issued for `127.0.0.1`.
If you have a domain mismatch issue, you will need to either get a new certificate (and update the xConnect IIS site(s) to use the new certificate) or change the domain for xConnect to one that is valid for the certificate.
To understand trust issues, it’s important to understand how certificates are issued. Certificates are issued by other certificates.
In fact, certificates can be issued in chains (Xzibit would definitely approve). Trust issues occur when the certificate that issued the server certificate is not considered to be trusted by the client. On Windows, trust is established by being included in the Trusted Root Certification Authorities in the machine certificates:
Note that to trust a certificate, only the public key for the server certificate must be imported here. If you're using self-signed certificates that issued themselves - like `localhost` in the screenshot - you can add the certificate itself to the trusted root certificates by exporting it and reimporting it into the root certificates. If using a commercially issued certificate, that certification authority's root certificates must be added to the trusted root - in most cases, they are already present.
There are some less common issues that can also cause server certificate negotiation errors. Servers will be commonly secured against supporting vulnerable ciphers, hash algorithms or SSL protocol versions. You might have heard of Heartbleed or POODLE vulnerabilities, or had to support TLS 1.2 if working with some web APIs such as SalesForce. This is a good idea, but if the server and client cannot mutually agree on a supported cipher, hash, and protocol version the connection will fail. If the certificate is trusted and has the correct domain, this would be the next thing to check.
If you’ve never heard of this before, you can secure your IIS servers using a tool like IISCrypto. Go do it now, this post will wait.
Note that the .NET HTTP client with framework versions prior to 4.6.2 defaults to only supporting TLS up to 1.1. Many modern security scripts will disable all TLS protocol versions except for 1.2, which will cause HTTP requests from clients with earlier versions of the .NET framework installed to fail.
Hopefully now you have a decent idea of how server certificates work. But xConnect also uses client certificates. A client certificate enables mutual authentication. With only a server certificate, the client must decide to trust the server but the server has no way to know if it should trust the client. Enter client certificates.
A client certificate is essentially the opposite of the server certificate. When using a client certificate, the negotiation works similarly to the server certificate, except that when the server sends the `ServerHello` (#3 above) it requests a client certificate in addition to sending its public key. The client then sends the public key of its client certificate back to the server - and then the server decides whether it should trust the client certificate.
If the client certificate is not trusted, it is rejected. The rules for validating a client certificate are up to the server and do not necessarily follow the same validation rules as a server certificate on the client. In the case of xConnect:
There are a lot of things that can go wrong with the client certificate, moreso than the server certificate. When troubleshooting, make your first step the Sitecore XP logs - they generally have some basic information about a bad client cert.
Chances are your client certificate validation failed. This could mean:

- The client certificate thumbprint in `App_Config\ConnectionStrings.config` is missing or incorrect. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there.
- The certificate connection string in `App_Config\ConnectionStrings.config` is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to `StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE`.

This indicates one of two things:

- The client certificate thumbprint is missing or wrong in the `App_Config\ConnectionStrings.config` file. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there.
- The certificate connection string in `App_Config\ConnectionStrings.config` is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to `StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE`.

As long as the server certificate is valid, this message is most likely that the Sitecore XP server's IIS app pool user account does not have read access to the client certificate's private key. This access is needed so that the Sitecore XP server can encrypt communications using its client certificate.
To remedy this issue, open the local machine certificates ("Manage computer certificates" in a start menu search) on the Sitecore XP server. Find the client certificate (normally under `Personal\Certificates`). Right click it, choose `All Tasks`, then `Manage Private Keys...`. You should get a security assignment window like this:

Next, add your IIS app pool user to the ACLs and grant it `Read` permissions (as above). Remember, if you're using AppPoolIdentity (you should be, unless using a domain account for Windows auth to SQL), you must select the account by choosing `Local Computer` as the search location, and enter `IIS APPPOOL\MyAppPoolsName` as the account name to find.
Still having issues? Well, you can also use the security audit log to find out which account is failing to get access, then add that account in the key ACLs above:
If you work at a Sitecore partner and will have multiple copies of Sitecore 9 running locally, this can cause issues if you issue a dedicated SSL server certificate to each site. This is because a given TCP port (e.g. 443, the default) can only have one SSL certificate bound to it. This precludes having multiple Sitecore 9 instances running together locally unless they share the same SSL certificate.
Wildcard certificates are perfect for this job. As long as you use the same top-level suffix for all your dev sites (e.g. `*.local.dev`), you can use the same wildcard certificate for your server certificate for all dev sites. Note that IIS' self-signed certificate generator will not create a wildcard certificate for you. You'll have to use something else, like New-SelfSignedCertificate, to create one.

Important note: You must issue a wildcard for at least two segments of domain for it to be trusted. For example, `*.dev` is bad, but `*.local.dev` is good.
Note that client certificates should be unique on each site, only the server certificate should be shared.
In the release version of Sitecore 9, you can also disable the requirement to use encryption with xConnect which can bypass a lot of debugging. Do not do this in production or else a herd of elephants will destroy you.
It’s possible to watch the SSL negotiation at a TCP/IP level using a network monitor such as Wireshark. This can provide insights on why your setup is failing when error messages are less than optimal. For example I spent a couple days diagnosing what turned out to be private key security issues. I figured this out by using Wireshark and observing that the client was never sending its client certificate after the server requested it, and figuring out why that was.
To use Wireshark to watch SSL traffic, you’ll have to set it up to decrypt traffic. This guide walks you through setting up decryption on Windows with an exported private key.
If you’re tracing local dev traffic (e.g. from localhost
to localhost
, including using your machine’s DNS name) Wireshark will not capture that unless you install npcap instead of the default pcap
packet capture software. Once npcap
is installed, tell Wireshark to bind to the Npcap Loopback Adapter
to see local traffic.
Here is a screenshot of the Wireshark capture where I diagnosed the client certificate security issue:
The land of Sitecore documentation is becoming a bit crowded these days. While at Symposium, I heard some people say they didn’t know how to keep up on new documentation - so here’s what I know. No doubt I missed some resources too, but these are the ones I usually use and follow.
This is the main place to find documentation for Sitecore, as well as Sitecore modules. It has a handy RSS feed of updated articles you should subscribe to.
Unfortunately the RSS feed is not entirely complete due to documentation microsites being proxied in under the main doc site (for example Commerce and the v9 Scaling Guide). These statically generated sites generally do not provide their own RSS feeds, and are thus harder to track updates to.
The Sitecore KB lists known issues, support resolutions, security bulletins, and other support information. Like the main doc site, it has its own RSS feed of updated articles that is absolutely worth subscribing to.
Sitecore’s official architecture guidance has its own website. Unfortunately, no RSS feed of updates.
The JavaScript Services module has its own separate documentation site. Unfortunately, no RSS feed of updates.
Where to go to actually download Sitecore releases and official modules such as SXA and PXM. There’s no RSS feed of new releases and updates, unfortunately.
A Sitecore-run blog aggregator that serves up a fresh helping of most major Sitecore blogs. Worth subscribing to via its RSS feed.
A community-driven Q&A site that’s part of StackExchange. If you have a question about Sitecore, there are many highly active members who are happy to help here.
Slack is a group messaging/discussion tool. The Sitecore Community Slack group has over 2,700 Sitecore developers with very active participation. If you do Sitecore, you should be here.
Community run unofficial training videos that cover development practices that are commonly used, but not covered in official Sitecore training. More opinionated, influenced heavily by real-world implementation experiences.
Unofficial documentation. It's not updated very often any more, but it still has some good information, especially the article on config patching.
The SPE documentation is so complete that it’s worth mentioning even though it’s for a single Sitecore module.
]]>I’m happy to announce the final release of Unicorn 4.0! Unicorn 4 comes with significant performance and developer experience improvements, along with bug fixes. Unicorn 4 is available from NuGet or GitHub.
Unicorn 4 is faster - a lot faster. Check out these benchmarks:
The speed increase is due to optimized caching routines, as well as the Dilithium batch processors. Dilithium is an optional feature that is off by default: because of its newness, it’s still experimental. I’m using it in production though. Give it a try - it can always be turned off without hurting anything.
For more detail into how Unicorn 4 is faster, and what Dilithium does, check out this detailed blog post.
Unicorn 4 features a refactored configuration system that is designed to support Sitecore Helix projects with an improved configuration experience. The new config system is completely backwards-compatible, but now enables configuration inheritance, configuration variables, and configuration extension so that modular projects can encode their conventions (e.g. paths to include, physical paths) into one base config and all the module configs can extend it.
This drastically reduces the verbosity of the module configurations, and improves their maintainability by allowing conventions to be DRY. Here’s a very simple example of a base conventions configuration:
1 | <configuration name="Habitat.Feature.Base" abstract="true"> |
And here’s a module configuration that extends it:
1 | <configuration |
There’s a lot more that you can do with the configuration enhancements in Unicorn 4 too. For additional details, read this extensive blog post.
Just about anything you can do with Unicorn can now be automated using Sitecore PowerShell Extensions in Unicorn 4. You can now run Unicorn SPE cmdlets to…
The Unicorn console has received a serious upgrade in Unicorn 4. If you’ve ever run a sync that changed a large number of items from the Unicorn Control Panel, you may have noticed the browser slow to a crawl and the sync seem to almost stop. The console that underpins Unicorn 3 and earlier started to choke at around 500 lines.
No longer! Unicorn 4’s upgraded console has spit out 100,000 lines without a hitch, and it should scale beyond that.
The automated tool console (PowerShell API) has also received an upgrade. Previously the tool console buffered all the output of a sync before sending it back. This caused problems in certain environments, namely Azure, where TCP connections that don’t send any data for more than four minutes are terminated. This would cause long-running syncs in Azure to die unexpectedly.
In Unicorn 4 the automated tool console emits data in a stream just like the control panel console. There's also a heartbeat timer: if no new console entries are made for 30 seconds, a .
will be sent to make sure the connection is kept active.
The streaming tool console also requires updating your Unicorn.psm1
file - not only will you get defense against TCP timeouts, you’ll also be able to see the sync occur in real time using the PSAPI just like you would from the control panel. No more waiting until it’s done to see how things are going :)
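For reference, a remote sync via the PowerShell API looks roughly like this. This is a sketch with a placeholder URL, shared secret, and configuration name; double-check the parameter names against the Unicorn.psm1 you deploy, since they can vary between versions.

```powershell
# Import the module that ships with Unicorn (path is a placeholder).
Import-Module .\Unicorn.psm1

# Invoke a remote sync against the Unicorn control panel endpoint.
# The -Configurations parameter is optional; omit it to sync everything.
Sync-Unicorn -ControlPanelUrl "https://my-site.local/unicorn.aspx" `
    -SharedSecret "your-shared-secret-here" `
    -Configurations @("Foundation.Serialization")
```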
Unicorn 4 can now exclude items from a configuration by template ID, thanks to Alan Płócieniak. See also Alan’s original post on the technique.
1 | <include name="Template ID" database="master" path="/sitecore/allowed"> |
You can also exclude items by a regular expression of their name. This enables scenarios such as wanting to include all templates, but exclude all __Standard values
items.
1 | <include name="Name pattern" database="master" path="/sitecore/namepattern"> |
The complete grammar for predicates is always in the predicate test config.
Unicorn 4’s breaking changes do not break any common use-cases of Unicorn, but review them to see if they affect you.
- The __Originator field is now serialized by default. This enables proper tracking of the origin of items instantiated from branch templates.
- The UseLegacyAttributeFormatting setting (formats items in Unicorn < 3.1 format) has been removed. The new format is now always used. This has always been off by default.
- FieldComparers are no longer activated using the Sitecore Factory, so they only support parameterless constructors (this would only affect custom comparers; the stock ones have always been parameterless).
- An updated Unicorn.psm1 is required if you are using Unicorn's PowerShell API. This file also now ships in the NuGet package, so you can be sure you're getting the right version for your Unicorn.
If you’re coming from classical Unicorn 3.1 or later, upgrading is actually really simple: just upgrade your NuGet package. Unicorn 4 changes nothing about storage or formatting (except that the __Originator
field is no longer ignored by default), so all existing serialized items are compatible.
If you’re invoking Unicorn via its remote PowerShell API, make sure to upgrade your Unicorn.psm1
to the Unicorn 4 version to ensure correct error handling with the streaming console.
Thank you to the community members who contributed code and bug reports to this release.
Today we're going to discuss how to syntactically improve the declaration of a contact facet class using syntax available in C# 6.0 (VS 2015) and C# 7.0 (VS 2017). It's important to note that the C# version is decoupled from the .NET Framework version: the C# 7.0 compiler is perfectly capable of compiling C# 7 syntax into a .NET 4.5-targeted assembly, for instance. So you can use these modern language features as long as you've got the right version of MSBuild or Visual Studio :)
Here’s the example Pete uses in his post, which follows other examples out there as well:
1 | using System; |
As you can see, the facet API requires string keys for the facet values - in this case stored as const string
- to get and set them. Further, as Pete notes:
I found out the hard way that the constants defined, the value must equal the actual name of the class property for the same attribute.
Well in C# 6 (VS 2015), there’s a syntax for that. The nameof
expression allows you to get the string name of a variable or property. This essentially hands off the management and maintenance of the const
value to the compiler, instead of the developer.
So we can clean up this example by using nameof
instead of constants - and, as a bonus, get refactoring support and compile-time validation of the names:
1 | using System; |
Finally, if you have C# 7.0 (VS 2017), you can also use expression-bodied members to further clean up the property syntax:
1 | using System; |
So there - now go forth and put your data in the xDB :)
]]>Now that that’s out of the way, let’s talk about another new Unicorn 4 feature: modular architecture friendly configurations.
When Habitat first launched, I was mildly incredulous at the amount of duplication in its Unicorn configurations. A setup with tons of tiny modules, all sharing similar but not identical configurations (such as custom root folders), was not really a consideration when multiple configurations were originally conceived. Fast forward to today, and that's a major use case that is more difficult than it needs to be.
Here’s an example of a Habitat Unicorn configuration:
1 | <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/"> |
It's long and it has a ton of boilerplate that is either identical in every module, or else defined by system conventions (e.g. physicalRootPath). We don't need to be that verbose when using Unicorn 4. When we set up a modular, convention-based system using Unicorn 4, we can start by using abstract configurations to define the conventions of our system:
1 | <configuration name="Habitat.Feature.Base" abstract="true"> |
This defines a configuration that other configurations can extend. Because of its abstract
-ness it is not a Unicorn configuration itself, only a template. Non-abstract configurations may also be extended.
This abstract configuration is also making use of Unicorn 4’s ability to do variable replacement in configurations. The $(layer)
and $(module)
variables are expanded in the extending configuration and are based on the convention of naming your configurations Layer.Module
. You can also expand more than one config per module and use your own variables. Using our abstract Habitat.Feature.Base
configuration above, the same Feature.News
configuration we started with can now be expressed much more simply:
1 | <configuration |
Nice huh? But what if you want to extend or replace a dependency in the inherited configuration? You can do that, too - and using Unicorn 4’s element inheritance system you can also do it very cleanly. Unicorn configurations have always been architecturally a set of independent IoC containers. The <defaults>
node in Unicorn.config
sets up the defaults, and then each configuration’s nodes override and replace the defaults if they exist. This is how you can deploy only new items with the NewItemsOnlyEvaluator
- you’re replacing the default evaluator with a different dependency implementation.
Unicorn 4 takes this a step further: with config inheritance, dependencies can be partially extended at an element level. You might have noticed this already in the Habitat.Feature.Base
configuration, when we did this:
1 | <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization" /> |
In Unicorn 3, this would have required a type
attribute. In Unicorn 4, unless you specify a type
attribute, any attributes you add either replace or add to the default (or inherited) implementation. So instead this kept the same default dependency definition and changed an attribute on it - the physicalRootPath
.
If you do specify a type
, nothing is inherited and it works like Unicorn 3. Thus existing configurations will also work without modification :)
But what about things that have more than just attributes, like the predicate
‘s include
nodes? You can append elements in the inherited configuration in that case. If we take our Habitat.Feature.Base
configuration above and extend it like this:
1 | <predicate> |
The end result is effectively:
1 | <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true"> |
You cannot remove inherited predicate nodes (or other dependencies that use children like fieldFilter
), so plan accordingly: you can only add elements.
And there you have it: with Unicorn 4 you can reasonably simply create serialization conventions for your modules and avoid configuration duplication - or if you’re not ready to go modular, you can at least enjoy not needing to have a type
on most configuration nodes.
The Unicorn console has also received a serious upgrade in Unicorn 4. If you’ve ever run a sync that changed a large number of items from the Unicorn Control Panel, you may have noticed the browser slow to a crawl and the sync seem to almost stop. The console that underpins Unicorn 3 and earlier started to choke at around 500 lines.
No longer! Unicorn 4’s console has spit out 100,000 lines without a hitch.
The automated tool console has also received an upgrade. Previously the tool console buffered all the output of a sync before sending it back. This caused problems in certain environments, namely Azure, where TCP connections that don’t send any data for more than 4 minutes are terminated. This would cause any long-running syncs in Azure to die unexpectedly.
In Unicorn 4 the automated tool console emits data in a stream just like the control panel console. There's also a heartbeat timer: if no new console entries are made for 30 seconds, a .
will be sent to make sure the connection is kept active.
The streaming console also requires updating your Unicorn.psm1
file - not only will you get defense against TCP timeouts, you’ll also be able to see the sync occur in real-time using the PSAPI just like you would from the control panel. No more waiting until it’s done to see how things are going :)
Absolutely. You can find Unicorn 4.0.0-pre03 on NuGet right now!
More stable than you might think. Unicorn 4 is largely additions, fixes, and enhancements to the already stable codebase behind Unicorn 3. The core pieces have not changed very much unless you enable Dilithium, and that's optional. The new config inheritance stuff has 97% code coverage. That's not to say it's bug-free, of course. If you find bugs, let me know and I'll fix them :)
Installation is just like Unicorn 3: Install the Unicorn
NuGet package, and follow the directions in the README that will launch on installation to set up configuration(s).
NOTE: Dilithium ships disabled by default. If you want to enable it, make a copy of Unicorn.Dilithium.config.example
and enable it.
If you’re coming from classical Unicorn 3.1 or later, upgrading is actually really simple: just upgrade your NuGet package. Unicorn 4 changes nothing about storage or formatting (except that the __Originator
field is no longer ignored by default), so all existing serialized items are compatible.
Taking advantage of the config enhancements detailed above is also entirely optional: Unicorn 3 configurations are totally readable by Unicorn 4.
If you’re invoking Unicorn via its PowerShell API, make sure to upgrade your Unicorn.psm1
to the Unicorn 4 version to ensure correct error handling with the streaming console.
Have fun!
Over time, many people have asked if there was a way to generate Sitecore packages from Unicorn. The answer has always been no, for many good reasons: packages install slowly, cannot ignore specific fields, and cannot process advanced exclusions the way a Unicorn predicate can. This makes them much less safe (and much slower) for deployment purposes compared to a remotely invoked Sync using deployed serialized items.
But there is a great use case for generating packages from Unicorn: authoring modules. As a module author, you need a way to track the items that belong to your module and also to reliably create Sitecore packages containing those items for distribution. Unicorn is a natural fit for tracking module items, but it has lacked the ability to automatically push updates into release packages the way it can with serialized items. This unnecessarily complicates things and reduces release reliability. That's bad.
So when Michael West and Adam Najmanowicz, the authors of Sitecore PowerShell Extensions, asked if there was a way we could export Unicorn configurations to packages my answer was absolutely.
SPE has long had packaging support built into it, and in fact SPE’s release packages are built using SPE. Unicorn packaging support is also implemented through SPE, and here’s how it works:
1 | # Create a new Sitecore Package (SPE cmdlet) |
And when you’re done with that, c:\foo.zip
would contain a package that when installed will contain the entire contents of any Unicorn configuration matching Foundation.*
.
New-UnicornItemSource
also accepts parameters to specify package installation options, exactly like SPE's New-ExplicitItemSource. This cmdlet is also very similar to how New-ExplicitItemSource
works: each item that is included in the configuration is added to the package as an explicit item source. Doing this also means that the exported package completely respects the Unicorn Predicate, including exclusions of child paths (note that if you specify -InstallMode Overwrite
, excluded children may be deleted by the package).
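Put together, the whole flow might look something like this. Treat it as a sketch: New-Package and Export-Package are SPE's packaging cmdlets, and the exact parameters of New-UnicornItemSource shown here (such as -Project) are assumptions - check the cmdlet help for the authoritative signature.

```powershell
# Create a new Sitecore package project (SPE cmdlet).
$package = New-Package "Unicorn Powered Package"

# Turn every matching Unicorn configuration into an explicit item source on the package.
# -InstallMode Overwrite mirrors the install option mentioned above; -Project is an assumption.
Get-UnicornConfiguration "Foundation.*" |
    New-UnicornItemSource -Project $package -InstallMode Overwrite

# Write the package zip to disk (SPE cmdlet).
Export-Package -Project $package -Path "C:\foo.zip" -Zip
```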
No, they are pulled from the Sitecore database because the Sitecore packaging APIs work in Item
s. So make sure to sync before you generate a package. Unless you’re using Transparent Sync in which case the items will already be up to date.
No. As mentioned above, packages are a slower and more dangerous method to deploy item updates to your site.
No. Unicorn would only be used in the development of the module, with the build process used to generate plain old Sitecore Packages for module releases. The module itself would not need to depend on Unicorn, Rainbow, or SPE.
]]>That’s not to say you’re not allowed to comment, because you can tweet your comments or join Sitecore Community Slack and comment all day.
In other news the blog is now fully SSL enabled, courtesy of CloudFlare.
Happy hacking!
To use Unicorn cmdlets in SPE, all that is necessary is to install the SPE package along with Unicorn. Unicorn 4 ships with configuration that remains dormant until SPE is installed, at which point the Unicorn cmdlets are enabled automatically. In case that's not clear enough: SPE is an optional addition and is not required to use Unicorn 4.
So what can we do with Unicorn cmdlets for SPE?
1 | # Get one |
The result of Get-UnicornConfiguration
is an array of IConfiguration
objects, which you can spelunk (e.g. with their Name
property) or pass to other cmdlets. Configurations are read only.
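For instance, a quick way to poke at what comes back (a small sketch - the Foundation.* pattern is just an example, and wildcard filtering is assumed here based on the packaging example elsewhere in these posts):

```powershell
# List the names of all configurations matching a wildcard filter.
Get-UnicornConfiguration "Foundation.*" | ForEach-Object { $_.Name }

# Or grab all configurations and inspect one as an object.
$configs = Get-UnicornConfiguration
$configs[0] | Get-Member
```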
Sync cmdlets make use of Write-Progress to provide a similar progress bar experience to the Control Panel, albeit a bit less responsive.
1 | # Sync one |
For example:
Sometimes you want to only sync a portion of a configuration. You can do that with PowerShell using Sync-UnicornItem
.
1 | # Sync a single item (note: must be under Unicorn control) |
The cmdlet to reserialize is called Export-UnicornConfiguration
because Reserialize
is not an approved verb for a cmdlet :)
1 | # Reserialize one |
Sometimes you want to only reserialize a portion of a configuration. You can do that with PowerShell using Export-UnicornItem
.
1 | # Reserialize a single item (note: must be under Unicorn control) |
You can also dump out the raw YAML for an item - or items. The output of ConvertTo-RainbowYaml
is either a string or array of strings depending on how many items were passed to it. Note that unless -Raw
is specified, the default field formatters and excluded fields Unicorn ships with are used. These are not customizable and will not reflect any changes you make to Unicorn's defaults.
This capability enables casual use of YAML serialization without having to use Unicorn or set up a configuration. It’s not a good solution for general purpose synchronization though simply because the nuances of storing trees of items in files are many. Very many. But I’m curious what uses people will find for this :)
1 | # Convert an item to YAML format (always uses default excludes and field formatters) |
In Rainbow the IItemData
interface is the internal unit of an Item. The ConvertFrom-RainbowYaml
cmdlet converts raw YAML string(s) into IItemData
which you can then spelunk as objects or deserialize as needed.
1 | # Get IItemDatas from YAML variable |
To deserialize items, use Import-RainbowItem
which takes IItemData
items in and deserializes them into the Sitecore database. No comparison is done before deserialization, which makes this a bit slower than a full Unicorn sync.
As a shorthand, Import-RainbowItem
also accepts YAML strings; however, because IItemData
can represent any sort of item, it is not limited only to deserializing YAML-sourced items.
1 | # Deserialize IItemDatas from ConvertFrom-RainbowYaml |
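Putting the Rainbow cmdlets together, a casual round trip might look like this. It's a sketch that assumes SPE's master: item provider path and the pipeline behavior described above; adjust the path for your own content.

```powershell
# Serialize a single item to YAML text using the default formatters and excludes.
$yaml = Get-Item "master:\content\Home" | ConvertTo-RainbowYaml

# Parse the YAML back into IItemData and deserialize it into the Sitecore database.
$yaml | ConvertFrom-RainbowYaml | Import-RainbowItem
```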
No. The existing PowerShell API uses Windows PowerShell to provide remote syncing capability and does not require installing Sitecore PowerShell. They serve different parallel purposes, and both are here to stay.
I’ll be releasing a beta once I finish the features I have planned. Yep, there’s at least one more ;)
]]>