Fun with async generators and for await

Async generators (and their close friend for await) are a really cool feature of modern JavaScript that you can use when a loop requires asynchronous iteration. What the heck is that? Well, let’s look at a regular for...of loop:

const array = [1, 2, 3];
for (const el of array) {
  console.log(el);
}

// 1
// 2
// 3

Nothing weird here, we’re looping over an array and printing each element. An array is a synchronous data structure, so we can loop over it very simply. But what about asynchronous data, like using the fetch API to get data from an HTTP endpoint? A simple implementation of looping over that data is not much more complex:

const res = await fetch("https://whatever.com/api");

if (res.ok) {
  const data = await res.json();

  for (const el of data) {
    console.log(el);
  }
}

We’re still using for...of and iterating synchronously within the loop, so let’s add one more factor: an API that uses pagination of some sort, which is pretty much every API ever made, since it’s expensive to load giant datasets. A simple pagination-based iteration might look something like this:

let data = [];
const pageSize = 10;
let page = 1;

do {
  const res = await fetch(
    `https://whatever.com/api?pageSize=${pageSize}&page=${page}`
  );

  if (!res.ok) {
    throw new Error("Badly written error message");
  }

  data = await res.json();

  for (const el of data) {
    console.log(el);
  }

  page++;
} while (data.length === pageSize);

This is sort of async iteration, but it suffers from a major shortcoming: because each page is ephemeral, you have to commingle the iteration code and the data fetching in the same loop, or else fetch all the pages up front and allocate a monster array before you can deal with the data. Neither of those is a great option, so let’s try an async generator function instead. An async generator is a function that returns an async iterable that you can loop over. Like other iterable and enumerable types (such as IEnumerable in C#), async iterables in JS are essentially an object with a next function that gets the next thing in the iterable (there’s a minimal sketch of this protocol after the list below). This has several interesting side effects:

  • You cannot have random access to an iterable (it is forward-only)
  • Because it is forward-only, memory need not be allocated for the entire iterable at once, so it’s easily possible to have an iterable with a nearly unlimited number of iterations without running out of memory.
  • If you stop iterating, the rest of the elements in the iterable are never evaluated. For our pagination example, this means that if we stop reading at page 2, but page 3 exists, we’ll never fetch page 3 from the server.
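
To make that protocol concrete, here’s a minimal sketch (my own illustration, assuming an async context such as a module with top-level await) of manually driving an async iterator the way for await does under the hood: each next() call returns a Promise of a { value, done } pair.

// a trivial async generator; for await normally drives the next() calls for us
async function* numbers() {
  yield 1;
  yield 2;
}

const it = numbers();
console.log(await it.next()); // { value: 1, done: false }
console.log(await it.next()); // { value: 2, done: false }
console.log(await it.next()); // { value: undefined, done: true }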

For my own sanity I’m going to drop into TypeScript here to illustrate the types that we’re passing around :)

// This paginateAsync function implements generic pagination functionality over an arbitrary API,
// using a passed-in fetchPage function to tell it how to fetch a new page of data.
// The * means the function is a generator (which returns an iterator),
// and the async means it's an async generator, not a sync generator.
// (For C# readers, this is really similar to a method returning IEnumerable and using `yield return`.)
async function* paginateAsync<TResult>(
  fetchPage: (offset: number, limit: number) => Promise<TResult[]>,
  pageSize: number
): AsyncIterableIterator<TResult> {
  let offset = 0;
  let pageData: TResult[] = [];

  do {
    // fetch one page of data
    pageData = await fetchPage(offset, pageSize);

    // yield each item in the page (lets the async iterator move through one page of data)
    for (const item of pageData) {
      yield item;
    }

    // increase the offset by the page size so the next iteration fetches fresh data
    offset += pageSize;
  } while (pageData.length === pageSize);
}

// To use this function, we call it (note the lack of `await`; we're starting the
// iterator, but no requests occur until we loop over it), then we can iterate
// over the result using a `for await` loop.
// Note that fetch resolves to a Response, so we parse the JSON body to get
// the TResult[] that fetchPage must return.
const iterator = paginateAsync(
  (offset, limit) =>
    fetch(`https://whatever.com/api?limit=${limit}&offset=${offset}`).then(
      (res) => res.json()
    ),
  100
);

for await (const item of iterator) {
  // every time 100 iterations complete here,
  // a new API call is sent for the next page.
  // this is transparent to your iteration.
  console.log(item);

  // unlike returning a huge array from a single await,
  // this for await construct only ever has 100 items in memory at a time,
  // so you can tune your batch sizes.

  // you can also abort the iteration before it's complete:
  // for example, if I break the loop after 199 items, no request
  // for page 3 will ever be made.
  // if (index === 199) break;
}

So now you can call a paginated API and treat the result as if it were a non-paginated loop. Pretty neat, right? Even better, you can use this in all modern browsers. (IE ain’t modern, folks…)
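
Because the result is just another async iterable, you can also compose generators. Here’s a small hypothetical take helper (my own sketch, not part of JSS or the code above) that caps iteration, and in doing so caps how many pages are ever fetched:

// Hypothetical helper: stops after `count` items; since the source generator is
// never advanced past that point, pages beyond it are never requested.
async function* take<TResult>(
  source: AsyncIterable<TResult>,
  count: number
): AsyncIterableIterator<TResult> {
  let taken = 0;
  for await (const item of source) {
    yield item;
    if (++taken >= count) return;
  }
}

// with a pageSize of 100, this consumes 150 items and fetches only pages 1 and 2
for await (const item of take(iterator, 150)) {
  console.log(item);
}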

Again for my C# readers: this is pretty similar conceptually to C#’s IAsyncEnumerable construct and can be used in similar circumstances.

Deploying multiple Netlify sites from one monorepo

Netlify is an incredibly easy to use and powerful host for static sites. While this blog is not hosted there, I did deploy it to Netlify, no joke, in less than 5 minutes, with automatic deployment of updates on commit. Check ’em out.

Anyhow today I had a monorepo that has more than one site that I wanted to deploy to Netlify. The repo looked something like this:

MyRepo
└───sites
    ├───charlie.com
    └───nick.org

Netlify supports this, but how to accomplish it is not well documented and takes a little sleuthing.

Monorepo support works by setting the Base directory of each site’s configuration to point at the relative path to your site root.

As it says on the tin, this essentially sets the path to cd to before starting the build commands. (Find it here: https://app.netlify.com/sites/<your-site-name-here>/settings/deploys)

But there’s one cool thing the description forgets to mention: You can also configure Netlify sites using a netlify.toml file, making your configuration versioned in Git. This gets really useful to control the whole build stack from one place: configuring the build commands, redirects, setting up lambda functions. Netlify usually expects netlify.toml in the root of the repository. However, the Base directory setting also changes where the netlify.toml is expected to live. If we do this:

MyRepo
└───sites
    ├───charlie.com
    │   └───netlify.toml
    └───nick.org
        └───netlify.toml

…then we get the lovely capability to both monorepo our sites, and also version our Netlify configuration. Awesome!

Running JSS headless mode in containers part 2: Build containers

In part 1 of this series, we investigated containerizing the JSS headless mode host. But there are some issues with that basic implementation:

  • We still have to pre-compile the JSS app to extract the server bundle before building the container, and this compile step is not necessarily consistent with the production container environment, since it depends on the host environment (different Node versions, OSes, etc. are possible)
  • The standard Node container is pretty large (900MB), and though there are good reasons for this, something smaller would be nice for production
  • Because the API key and Sitecore hostname are baked into the server bundle at build time, it would be necessary to maintain a container image for each environment

So, let’s fix this by using a build container to create a standardized build environment where we can make our server bundle.

Build Containers 101

When a Dockerfile executes, the state of each intermediate step is stored so that it need not be repeated every time the image rebuilds. But more importantly, you can also create intermediate build containers that live only long enough to perform a build step, then get thrown away, except for some artifacts you want to persist into some future part of the container build. In the case of our JSS image, the idea is something like this:

  • Create a full Node container
  • Copy the JSS app into it and run jss build
  • Create a lightweight Node container
  • Deploy the artifacts from the full Node container and the JSS headless app into it

The build container that we use to build the JSS app is thrown away after the build occurs, leaving only the lightweight production container with its artifacts. In this case, thanks to switching from node:lts to node:lts-alpine as the base container, the built container size shrank from 921MB to 93MB.

Note that because the base image is stored as a diff, the image size reduction affects the initial download time of the image on a new host, but once the node:lts image is cached it really only changes the amount of static disk space consumed.

Adding a build container step involves adding a few lines to the top of the Dockerfile from part 1:

# Create the build container (note aliasing it as 'build' so we can get artifacts later)
FROM node:lts as build

# Install the JSS CLI
RUN npm install -g @sitecore-jss/sitecore-jss-cli

# Set a working directory in the container, and copy our app's source files there
WORKDIR /build
COPY jss-angular /build

# Install the app's dependencies
RUN npm install

# Run the build
RUN npm run build

# Now, we need to switch contexts into the final container
# lts-alpine is a lightweight Node container, only 90MB
# When we switch contexts, the build container is supplanted
# as the context container
FROM node:lts-alpine

# ...

# When we copy the app's source into the final container
# we need to use --from=[tag] to get the files from our build container
# instead of the local disk
COPY --from=build /build/dist /jss/dist/${SITECORE_APP_NAME}

Tokenizing the Server Bundle

To solve the issue of the Sitecore API URL and API Key being baked into the server and browser bundles by webpack during jss build, we need to use tokenization. These values really do need to be baked into the file at some point, because the browser that executes them does not understand environment variables on your server or how to replace them - but, we should not need to re-run webpack every time a container starts up either.

We can work around this by baking specific, well-known tokens into the bundle files and then expanding those tokens when the container starts from environment variable values. The approach works something like this:

  • When the build container builds the JSS app, we force it to use specific well-known token values for the API host and API key, such as %sitecoreApiHost%
  • We move all the build output in the build container from *.js files to *.js.base files. This means the container itself does not contain any runnable JS in its /dist. This is necessary so the container can generate the final files each time it starts up. (Since the same image can start many times with different environment variables present, it has to ‘rebake’ the JS each time.)

Doing this is a bit harder than just doing the build container. First, during the container build in the Dockerfile:

# Before the build container runs the `build` command,
# we need to set specific API key and host values to bake
# into the build artifacts to replace later.
RUN jss setup --layoutServiceHost %layoutServiceHost% --apiKey 309ec3e9-b911-4a0b-aa8d-425045b6dcbd --nonInteractive
RUN npm run build

# After the build container runs the `build` command,
# we need to move all the .js files it emitted to .js.base files
RUN find dist/ -name '*.js' | xargs -I % mv % %.base

With the updated Dockerfile in place, the container we build will now have .base files ready to specialize into the running configuration when the container starts up. But without any changes to the image itself, it would fail because we can’t run an app using .base files! So we need to add a little script to the node-headless-ssr-proxy to perform this specialization when it starts up inside a container. The specialization process:

  • Copy all .base files to .js file of the same name (make a runtime copy to use in the browser)
  • Search & replace the well-known tokens in the .js files with the current runtime environment variables
  • Start the JSS headless proxy, which will now use the generated .js files and run normally

I used bootstrap.sh for the script name, but any name is fine.

find dist/ -name '*.base' | while read filename; do export jsname=$(echo $filename | sed -e 's|.base||'); cp $filename $jsname; sed -i -e "s|%layoutServiceHost%|$SITECORE_API_HOST|g" -e "s|309ec3e9-b911-4a0b-aa8d-425045b6dcbd|$SITECORE_API_KEY|g" $jsname; done

This script is a rather hard-to-read one-liner, so let’s piece it out to understand it (a Node-based alternative sketch follows the breakdown):

  • find dist/ -name '*.base' | while read filename - Finds *.base files anywhere under dist, and reads each found filename into $filename in a loop body
  • do export jsname=$(echo $filename | sed -e 's|.base||') - Sets $jsname to the name of the found file, with the extension changed from .base to .js
  • cp $filename $jsname - copies the .base file to the equivalent path, but using the .js extension instead
  • sed -i -e "s|%layoutServiceHost%|$SITECORE_API_HOST|g" -e "s|309ec3e9-b911-4a0b-aa8d-425045b6dcbd|$SITECORE_API_KEY|g" $jsname - uses sed to perform a regex replace on the known values we baked into the base file, replacing them with the environment variables ($SITECORE_API_HOST and $SITECORE_API_KEY) that form the current runtime configuration for those values
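
If shell one-liners aren’t your thing, the same specialization step could be written in Node instead. This is a hedged sketch of an alternative, not the script used above: it assumes it runs from the proxy root, with the same environment variables set and the same tokens baked in at build time.

import { readdirSync, readFileSync, writeFileSync, statSync } from 'fs';
import { join } from 'path';

// recursively collect every file path under a directory
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    return statSync(full).isDirectory() ? walk(full) : [full];
  });
}

for (const baseFile of walk('dist').filter((f) => f.endsWith('.base'))) {
  const contents = readFileSync(baseFile, 'utf8')
    // expand the same well-known tokens the build baked in
    .split('%layoutServiceHost%').join(process.env.SITECORE_API_HOST || '')
    .split('309ec3e9-b911-4a0b-aa8d-425045b6dcbd').join(process.env.SITECORE_API_KEY || '');

  // file.js.base -> file.js
  writeFileSync(baseFile.replace(/\.base$/, ''), contents);
}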

Finally we need to get this script to run each time the container starts up. There are several ways we could do this, but I elected to add an npm script to the headless proxy’s package.json:

"scripts": {
"start": "node index.js",
"docker": "./bootstrap.sh && node index.js"
},

…and then changed the ENTRYPOINT in the Dockerfile to call that script:

ENTRYPOINT npm run docker

The final step is to rebuild the container image so we can start it up, using docker build.

Using the tokenized container

The headless proxy Node app has always known how to read environment variables for the Sitecore API host and API key, but those have only applied to the SSR execution not the browser-side execution. With the modifications we’ve made, setting those same environment variables will now also apply to the browser. Doing this with Docker is quite trivial when booting the container, for example:

docker run -p 3000:3000 --env SITECORE_API_KEY=[yourkey] --env SITECORE_API_HOST=http://your.site.core [container-image-name]

Putting it together

For more clarity, here’s the full contents of the Dockerfile with all these changes made:

FROM node:lts as build
RUN npm install -g @sitecore-jss/sitecore-jss-cli
WORKDIR /build
COPY jss-angular /build
RUN npm install
# setup to use static values we'll later replace with env vars
# (for values that are baked into the server bundle)
RUN jss setup --layoutServiceHost %layoutServiceHost% --apiKey 309ec3e9-b911-4a0b-aa8d-425045b6dcbd --nonInteractive
RUN npm run build
# Rename all .js files to .js.base (so we can bootstrap tokens later)
RUN find dist/ -name '*.js' | xargs -I % mv % %.base

FROM node:lts-alpine

ENV SITECORE_API_HOST=http://host.docker.internal
ENV SITECORE_API_KEY=MYKEY
ENV SITECORE_APP_NAME=jss-angular
ENV SITECORE_JSS_SERVER_BUNDLE=./dist/${SITECORE_APP_NAME}/server.bundle.js
ENV SITECORE_ENABLE_DEBUG=false
ENV PORT=3000

WORKDIR /jss
COPY ./node-headless-ssr-proxy /jss
COPY --from=build /build/dist /jss/dist/${SITECORE_APP_NAME}
RUN npm install

ENTRYPOINT npm run docker
EXPOSE ${PORT}

In this episode, we have improved the JSS headless container build process by running the entire build inside containers for improved repeatability, and we have tokenized the browser JS bundles so that the same container image can be deployed to many environments with different API hosts without needing a rebuild. What’s next? Orchestrating multiple instances with Kubernetes.

Running JSS headless mode in containers, part 1

I’ve been playing with containers lately and as an experiment, containerized JSS Headless Mode. Since I had fun doing this, I figured I’d share what I learned. Note that these are my own explorations and should not be construed as any official statement of container support for JSS, nor are they supported via official Sitecore channels.

Containers 101: What’s a container?

The best way to understand containers quickly is, of course, a meme.

Another way to think of a container is a lightweight virtual machine. Unlike a VM, a container shares much of its system with the host OS or node. This means that containers:

  1. Are much smaller both in disk and memory usage compared to a VM
  2. Do not provide as strong of an isolation from the host as a VM
  3. Are more easily based on a standard distribution. For example in this post we won’t be building a container from scratch; we will take the standard node container and deploy JSS to it - thus, we offload the maintenance of the base container to the Node maintainers, and we take on the maintenance of only our app.

Containers have become incredibly popular as a way to build and deploy applications because of their consistency and low resource usage. Especially as more applications take on more server-based dependencies (i.e. microservice architectures, or even a traditional app that may need a database, search service, etc), containers provide a reasonable way to replicate such a complex IT infrastructure on a developer machine in the same way that it runs in production - without each developer needing to have a 1TB RAM, 28-core server to run all those virtual machines.

So with that in mind, what if we wanted to containerize Sitecore JSS’ headless mode host?

Note: we’re only containerizing the JSS SSR host in this post; the rest of the Sitecore infrastructure would still need to be deployed traditionally.

Creating a JSS Docker container

If you’re planning to follow along at home with this build, note that you’ll need to install Docker Desktop in order to be able to locally build and run the containers. You may also need to enable virtualization in your UEFI, if it’s off, or potentially for Windows also enable Hyper-V and Containers features at an OS level. Consult the Docker docs for help with that :)

When you create a container, there are three main tasks:

Determine the base container to build from

Containers are built on top of other containers in an efficient and lightweight way. This means that, for example, your container might start with a Windows Server container or an Ubuntu container…or it might start from a Node container that was based on a Debian container. You get the idea - containers, like ogres or ‘90s software architecture, have layers. Each layer is built as a diff from the underlying layer. When you make a container, you’re adding a layer.

In our case, JSS headless SSR is a Node-based application, so we will choose the Node container as our base.

Define the Dockerfile

The dockerfile is a file named Dockerfile that defines how to create your container. It defines things like:

  • What your base container is (FROM node:lts)
  • How to modify the base container to turn it into your container (scripts and file copying)
  • Defaults, like which TCP/UDP ports the container can expose

In our case we want to start from the node container:

FROM node:lts

Then we want to tell Docker how to deploy our JSS app on top of the Node container. We do this by telling it which files we want to copy into the container image and where to put them, as well as any commands that need to be run to complete the setup:

# We want to place our app at /jss on the container filesystem
# (this is a fairly arbitrary choice;
# use something app-specific and don't use '/')
# Subsequent commands and copies are relative to this directory.
WORKDIR /jss

# Specify the _local_ files to copy into the container;
# in this case a copy of the headless SSR proxy: https://github.com/Sitecore/jss/tree/dev/samples/node-headless-ssr-proxy
COPY ./node-headless-ssr-proxy /jss

# Run shell commands _inside the container_ to set up the app;
# in this case, to install npm packages for the headless Node app.
# NOTE: the container is built on the Docker server, not locally!
# Commands you run here run inside the container, and thus
# cannot for example reference local file paths!
RUN npm install

# To run JSS in headless mode, we also need to deploy
# the JSS app's server build artifacts into the container
# for the headless mode proxy to execute. This is another copy.
COPY my-jss-app-name/dist /jss/dist/my-jss-app-name

# When the container starts, we have to make it do something
# aside from start - in this case, start the JSS app.
# The command is run in the context of the WORKDIR we set earlier.
ENTRYPOINT npm run start

# The JSS headless proxy is configured using environment variables,
# which allow us to configure it at runtime. In this case,
# we need to configure the port, app bundle, etc
ENV SITECORE_APP_NAME=my-jss-app-name

# Relative to /jss path to the server bundle built by the JSS app build
# Note: this path should be identical to the path deployed for integrated
# mode, so that path references work correctly.
ENV SITECORE_JSS_SERVER_BUNDLE=./dist/${SITECORE_APP_NAME}/server.bundle.js

# Hostname of the Sitecore instance to retrieve layout data from.
# host.docker.internal == DNS name of the docker host machine,
# i.e. to hit non-container localhost Sitecore dev instance
ENV SITECORE_API_HOST=http://host.docker.internal
ENV SITECORE_API_KEY=GUID-VALUE-HERE

# Enable or disable debug console output (don't use in prod)
ENV SITECORE_ENABLE_DEBUG=false

# Set the _local_ port to run JSS on, within the container
# (this does not expose it publicly)
ENV PORT=3000

# Tell Docker that we expose a port, but this is for documentation;
# the port must be mapped when we start the container to be exposed.
EXPOSE ${PORT}

Build the container

Once we have defined the steps necessary to create the container image, we need to build the container. Building the container:

  • Collects all the files in the Dockerfile directory and uploads them to the Docker host (unless listed in a .dockerignore file)
  • Acquires the base image, if it’s not already on the Docker host
  • Creates a container based on the base image and starts it
  • Executes your Dockerfile script within the container to configure it
  • Captures your Docker image and stores it for reuse

The Dockerfile does not execute locally, so make sure you don’t make that assumption when writing RUN directives; execution occurs within the container being built, so it runs in the context of the container (in this case, Debian) and the dependencies that are part of the container.

To build your JSS container, within the same folder as your Dockerfile run:

docker build -t your-image-name .

Once the build is done, you can find your image on Docker using:

docker images

Using the JSS Docker container

Up to this point we have collected and built the container, but nothing has been run. To create a new instance of your container and start it up, run

docker run -p 3000:3000 --name <pick-a-name-for-container-instance> <imagename>

The -p maps your localhost port 3000 to the container port 3000 (which we specified the Node host to run on previously using an environment variable).

Once you start the container, visiting http://localhost:3000 should run the app in the JSS headless host container.

Container Debugging Tips

  • Viewing running containers - list running containers using the docker ps command. If a container was started without an explicit --name, this can help find it.
  • Opening a shell in a container - to run diagnostic shell commands, you can open a root shell to a running container. The docker exec command lets you run commands, including starting a shell - for example, docker exec -it <container-name> bash. The -it says you want an interactive TTY (in other words an ongoing shell, not a one-off command execution and exit)

What’s Next?

In this post, we’ve created and run a Docker container of the JSS headless mode. This works great for a single container, but for production scenarios we would likely need to orchestrate multiple instances of the container to handle heavy load and provide redundancy. Next time, we will improve our container build script using a build container, then finally the series will end with orchestrating the container using Kubernetes.

Build 2019: All the things

I spent the first part of this week out at Build 2019, and I learned a lot! Here’s all the news I saw fit to print from Build in a concise, notesy format.

.NET Core 3

The next version of .NET Core will be released in September 2019. It will feature a raft of improvements, notably WPF/desktop app support (Windows only), .NET Standard 2.1 (not going to be supported by .NET 4.x ever), and C# 8 (.NET Standard 2.1 required).

Coinciding with the release of .NET Core 3 will be dotnetconf from September 23-25, a virtual conference highlighting .NET Core 3.

.NET Core 3.1, the long term support version, is slated to ship in November.

.NET 5 and the Unified BCL

After .NET Core 3 ships, .NET Core is dead. Instead .NET 5 will ship, and it will unify the abstraction of .NET Standard into a universal BCL that can run on any .NET 5 compatible runtime (i.e. Xamarin, Mono, Windows .NET). It will also gain Java and Swift interop capabilities (from Mono/Xamarin) on all platforms. The idea is that .NET 5 will be a singular platform that runs anywhere from mobile devices, to IoT/Raspberry Pi, to desktop apps, to cloud server(less).

Web Forms and WCF will never be ported to .NET Core/.NET 5. Specifically for Web Forms, Blazor will be the recommended migration path.

Following .NET 5, the .NET platform will have yearly releases (.NET 6, 7, 8, …). Alternating years will be LTS versions, in other words 2020’s .NET 5 will be supplanted by the LTS .NET 6 in 2021.

More on .NET 5 here

C# 8

Note: C# 8 requires compiler changes that need .NET Standard 2.1+. In other words, C# 8 can only be used with .NET Core 3 and later as a consumer!

The main focus of C# 8 is “robustness.” There are a number of new features that support this goal:

Async Enumerable

Ever since async/await was shipped in C#, it’s been problematic to use it with enumerables because you must await either Task<IEnumerable<T>> (thus awaiting the WHOLE enumerable, which loses its lazy enumeration advantages), or IEnumerable<Task<T>> which potentially requires awaiting in a loop, which is also suboptimal. It also prevents the use of yield return in async methods, which makes them significantly less pretty.

In C# 8, this is fixed by introducing IAsyncEnumerable<T>, an asynchronously enumerable enumerable type. This type is enumerated using await foreach, i.e. await foreach(var t in asyncEnumerable) { /* where t is not a task */ }. The implementation of IAsyncEnumerable is simply allowed to yield return values, giving the enumerator control over its own internal asynchrony needs, batching, etc.

Nullable Reference Types

The NullReferenceException is everyone’s favorite C# bugbear, and solutions good and bad abound for asserting that method arguments are not null to avoid throwing them (my favorite is var x = arg ?? throw new ArgumentNullException(nameof(arg));). In C# 8, you can explicitly declare reference types as nullable, explicitly stating that a method can return - or accept - a null value. Doing this allows the compiler to remove the need for all those assertions, as it can warn you at compile time if you’re not checking a nullable type for null before using it. This is an opt-in feature either with #nullable enable in a file, or it can be turned on per-project.

Item? GetItem() {
    // ...
}

void DoStuff() {
    // with nullable reference types on, this will throw a compiler warning
    // because GetItem declared explicitly that it can return null
    var troll = GetItem().Axes.GetDescendants();

    // you can bypass the warning if you know what you're doing (lol) with !
    var explodingTroll = GetItem()!.Axes.GetDescendants();
}

Range Expressions

A common need is to parse a string or array and split it up into pieces by index; for example “this string’s last two characters” or “the first 5 elements in this array.” This sort of code is quite vulnerable to naughty data input causing exceptions, for example "a".Substring(5) will throw because it isn’t 5 characters long.

C# 8 range expressions allow you to concisely express this sort of slicing (note that a slice falling outside the bounds of the string or array will still throw, so lengths still need validating). They work using ^ to anchor an index to “length - x” (plain integers anchor from the start), a .. spread, and an optional endpoint. A few examples:

var str = "hello world";
var a = str[0..5]; // "hello"
var b = str[^1]; // 'd'
var c = str[^5..^1]; // "worl"

Switch Expressions

The switch statement receives an upgrade in C# 8 with the ability to assign it directly to a variable, eliminating the need for clumsy break statements in every case. It’s also possible to use pattern matching with this format (not pictured).

var result = "hello" switch {
    "hello" => true,
    "goodbye" => false,
    _ => false // default case via a discard
};
// result = true

Default Interface Implementations

Interfaces can have default implementations for members. This is not intended to kill IoC containers as much as be a tool for API creators to ship additions to public interfaces without breaking existing consumers of that interface. The additional members need only be optionally implemented by downstream consumers, with the defaults used if not overridden.

Using Declarations

The using statement gets an overhaul to avoid needing a block scope. A using statement prevents forgetting to dispose IDisposable resources, but before C# 8 it required a block scope of its own, which, especially with nested usings, made things hard to read. In C# 8, you can define a variable with the using keyword and no block scope, and it is implicitly disposed at the end of the current block scope. For example:

public void Foo() {
    using var file = new FileStream(@"c:\foo.txt", FileMode.Open);
    // file will be disposed when the Foo() block exits
}

More on C# 8 here

TypeScript

In TypeScript 3.4 - currently RC - you can enable incremental builds (via tsconfig, or --incremental to the CLI), which allows TS to cache the output of the last build/watch run and essentially ‘rehydrate’ it during the next build to avoid rebuilding unchanged modules. The upshot of this on the VS Code codebase is that warm build times went from 47 seconds to 11 seconds.

On larger TypeScript codebases, using project references can allow TypeScript to partition compilation units, enabling it to only rebuild changed units in the dependency tree (much like projects and solutions in Visual Studio). This can be used to avoid recompiling an entire TypeScript project every time, even without incremental builds.

New modernized TypeScript documentation, with content oriented around current TypeScript practices and improved clarity, is in progress. The current target is late 2019 to release the new docs.

More on TypeScript 3.4 here

Live Share

Using Live Share developers can collaborate effectively while remote, with either Visual Studio, VS Code, or both. It’s a bit like a code-specific combination of screen sharing and collaborative editing. This includes things like:

  • Editing code in a Google Docs-like collaborative realtime editor
  • Setting breakpoints, controlling debugger execution that executes on the host’s computer
  • Reviewing the other dev’s localhost ports
  • VS Code can live share with Visual Studio
  • The viewer need not have an SDK, debugger, or plugins for the host’s code to participate - or even the same CPU architecture.
  • Works across platforms too, so you could connect using Code on a Mac to VS on Windows and control debugging a Windows service, for example.

More on Live Share here

Visual Studio Code Remote Editing

Code can now connect to a remote system (via SSH or directly to a container) and edit the remote instance as if it were local files. This includes things like installing Code plugins on the remote environment - it’s basically connecting to a “headless” VS Code service. For example, you could write Ruby code on a Docker container in AKS from a windows machine running Code…without needing to set up a Ruby dev environment locally or install any Ruby plugins into Code. Or, do .NET Core dev on a remote VM without needing to install the .NET Core SDK locally.

Remote editing really shines when combined with Azure Dev Spaces (read on…).

More on remote editing here

Visual Studio Online (not to be confused with the old name for Azure DevOps)

  • Visual Studio Code in a browser
  • Edit code, review PRs, connect to remote development environments (SSH, Containers)
  • Works on any browser. Want to review PRs on an iPad? Sure!
  • In private preview at the moment

More on VSO here

VS Code and VS Tips

  • There were excellent sessions on getting more out of Visual Studio and VS Code. If you want to learn something about your tools, these were pretty great. For example:
  • You can configure Code to attach multiple debuggers at launch (i.e. Chrome + Node debuggers)
  • F1 opens the Code palette, in addition to Ctrl-P
  • You can quickly open files in Code from the command palette by removing the > prompt and typing a filename or expression
  • Jump to outlined functions, tags, etc. in the current file, in Code, by entering @: in the command palette (i.e. @:myfunc)
  • IntelliCode (available in Code and VS 16.1+) applies ML models to predict the most commonly used code completions for a given state. For example if(arrayVar. might suggest Length but stringVar. might suggest Split. The suggestions model was trained on 2000 of the most popular open source codebases on GitHub, so they’re based on actual community practices.
  • Visual Studio is aggressively rendering R# irrelevant in current previews. You really might not need it at all in the future. Tons of new refactorings, code cleanup improvements, ability to infer .editorconfig files, et al.

Code tips session
Visual Studio tips session
Visual Studio debugger/diags tips

Azure Dev Spaces

It’s no secret that microservice architectures can be pretty difficult to develop locally. Especially if they tend towards the distributed monolith antipattern ;) Well, Azure decided to do something about that. Probably the coolest demo of the whole event.

Dev Spaces is a prebuilt microservice-oriented workflow for developers based on Azure Kubernetes Service (AKS). The basic concept is that a dev team would share an AKS cluster across their whole team - because developers would probably be working on a few microservices, not the whole galaxy of the system, they can then basically “branch” specific microservices out for personal development, while referencing the rest of the system built from the latest CI build. In other words, there’s no need to mock or setup local microservices that you don’t care about, because yours runs in AKS and refers to the master build.

Even more bonkers, you can use remote development to debug and auto-deploy files to your personal microservices running in AKS. Pull requests can be made to similarly build in their own namespace, giving faster and more efficient use of build time. Seems like a pretty darn nice experience, with most of the orchestration issues no longer your problem.

Watch the session
Documentation on Dev Spaces

YAML Builds & Releases for Azure DevOps Pipelines

You can define your Azure DevOps build and release pipelines using YAML files that can be committed to the repository. This allows the build system to be versioned and stable across branches and enables proper testing of changes to the build via PRs. In preview now, release pipelines (in addition to builds) can be defined in YAML. There is also a visual editor that allows generation of YAML for common tasks using a GUI.

Coming soon, pipelines will be able to automatically generate a Kubernetes manifest and Helm chart for any project with a Dockerfile. This will also generate appropriate build YAML to allow building docker images, deploying them to Azure Container Registry, and spinning up the Kubernetes cluster from those images on Azure Kubernetes Service. Looks really easy to use, and a definite lowering of the barrier to entry to Kubernetes deployment. The pipeline doesn’t only support Azure k8s either; it can deploy to other container registries or k8s clusters on premise or in other clouds.

Azure Search

Azure Search is gaining the ability to apply cognitive services to data being indexed; for example, it can index the contents of images as detected by cognitive services. Also, the 1000-field limit that Sitecore users love is being investigated and may be raised or eliminated in a near-term time frame.

ML.NET 1.0 & AutoML

A library to build and run machine learning models from within .NET. Can consume several trained model formats, including TensorFlow, which lets you integrate ML models built by a data scientist using Python et al into .NET runtimes. Microsoft is also working on the ONNX model format for interoperable models. While it’s capable of training its own models too, it is interesting to see the model promoted where data scientists do their modeling using mainstream ML tools (i.e. Python), and deploy only the trained model to the .NET application. Promising in terms of integrating .NET with mainstream data scientists.

The AutoML toolkit was also announced. AutoML is a nice looking tool for non-data-scientists to take a dataset and automatically discover a good ML algorithm and hyperparameter set to produce an accurate model. Definitely aimed at the backend developer looking to add a splash of ML to their toolset, as opposed to data scientists, but this seems like it could significantly lower the barrier to ML entry for .NET developers.

More on ML.NET and AutoML here

Go forth and code, me hearties.

Routing Sitecore links with JSS

When building a single-page app with Sitecore JSS and defining internal links in Sitecore content, you may notice that clicking a link in the JSS app does not act like a single-page app. Instead the link click causes a full page refresh, because the routing library used by the app is not aware that the link emitted by JSS can be treated as a route link.

Maybe you don’t want that to happen, because you like the fluidity of single-page apps or want to reduce bandwidth. Excellent! You’ve come to the right place.

The following examples use React, but the same architectural principles will translate well to Vue or Angular apps and the JSS field data schema is identical.

There are two places where we can receive links back from Sitecore:

Sitecore supports content fields that are explicitly hyperlinks (usually General Link fields, also referred to as CommonFieldTypes.GeneralLink in JSS disconnected data). When returned, these fields contain link data (an href and optionally body text, CSS class, target, etc.). In JSS apps, these are rendered using the Link component like so:

import { Link } from '@sitecore-jss/sitecore-jss-react';

const MyJSSComponent = (props) =>
  <Link field={props.fields.externalLink} />;

export default MyJSSComponent;

This gives us normal anchor tag output in the DOM:

<a href="/path">Link Text</a>

But in react-router, a link needs to be rendered using react-router-dom‘s Link component instead, for example:

import { Link } from 'react-router-dom';

const RouterLinkComponent = (props) =>
  <Link to="/path">Link Text</Link>;

export default RouterLinkComponent;

To make JSS general links render using react-router links for internal links, we can create a component that conditionally chooses the link component like this:

import React from 'react';
import { Link } from '@sitecore-jss/sitecore-jss-react';
// note we're aliasing the router's link component name, since it conflicts with JSS' link component
import { Link as RouterLink } from 'react-router-dom';

/** React component that turns Sitecore link values that start with / into react-router route links */
const RoutableSitecoreLink = (props) => {
  const hasValidHref = props.field && props.field.value && props.field.value.href;
  const isEditing = props.editable && props.field.editable;

  // only want to apply the routing link if not editing (if editing, need to render editable link value)
  if (hasValidHref && !isEditing) {
    const value = props.field.value;

    // determine if a link is a route or not. This logic may not be appropriate for all usages.
    if (value.href.startsWith('/')) {
      return (
        <RouterLink to={value.href} title={value.title} target={value.target} className={value.class}>
          {props.children || value.text || value.href}
        </RouterLink>
      );
    }
  }

  return <Link {...props} />;
};

// usage - drop-in replacement for JSS' Link component
const MyJSSComponent = (props) =>
  <RoutableSitecoreLink field={props.fields.externalLink} />;

export default MyJSSComponent;

With this component, now your internal link values will be turned into router links and result in only a new fetch of route data instead of a page refresh!

Rich Text Fields

Rich Text fields are a more interesting proposition because they contain free text that is placed into the DOM, and we cannot inject RouterLink components directly into the HTML blob. Instead we can use React’s DOM access to attach an event handler to the rich text markup after it’s rendered by React that will trigger route navigation.

Similar to the general link field handling, we can wrap the JSS default RichText component with our own component that selects whether to bind the route handling events based on whether we’re editing the page or not:

import React from 'react';
import ReactDOM from 'react-dom';
import { RichText } from '@sitecore-jss/sitecore-jss-react';
import { withRouter } from 'react-router-dom';

/** Binds route handling to internal links within a rich text field */
class RouteLinkedRichText extends React.Component {
  constructor(props) {
    super(props);

    this.routeHandler = this.routeHandler.bind(this);
  }

  // handler function called on click of route links
  // pushes the click into the router history thus changing the route
  // props.history comes from the react-router withRouter() higher order component.
  routeHandler(event) {
    event.preventDefault();
    this.props.history.push(event.target.pathname);
  }

  // rebinds event handlers to route links within this component
  // fired both on mount and update
  bindRouteLinks() {
    const hasText = this.props.field && this.props.field.value;
    const isEditing = this.props.editable && this.props.field.editable;

    if (hasText && !isEditing) {
      const node = ReactDOM.findDOMNode(this);
      // selects all links that start with '/' - this logic may be inappropriate for some advanced uses
      const internalLinks = node.querySelectorAll('a[href^="/"]');

      internalLinks.forEach((link) => {
        // the component can be updated multiple times during its lifespan,
        // and we don't want to bind the same event handler several times so unbind first
        link.removeEventListener('click', this.routeHandler, false);
        link.addEventListener('click', this.routeHandler, false);
      });
    }
  }

  // called once when component is created
  componentDidMount() {
    this.bindRouteLinks();
  }

  // called if component data changes _after_ created
  componentDidUpdate() {
    this.bindRouteLinks();
  }

  render() {
    // strip the 'staticContext' prop from withRouter()
    // to avoid confusing React before we pass it down
    const { staticContext, ...props } = this.props;

    return <RichText {...props} />;
  }
}

// augment the component with the react-router context using withRouter()
// this gives us props.history to push new routes
RouteLinkedRichText = withRouter(RouteLinkedRichText);

// usage - drop-in replacement for JSS' RichText component
const MyJSSComponent = (props) =>
  <RouteLinkedRichText field={props.fields.richText} />;

export default MyJSSComponent;

Now internal links entered in rich text fields will also be treated as route links.

Advanced Usages

These examples use simple internal link detection that consists of “starts with /.” There are some edge cases that can defeat simple link detection, such as:

  • Scheme-insensitive links (//google.com) that are HTTP or HTTPS depending on the current page. These are an antipattern; encrypt all your resources.
  • Links to static files (i.e. media files).

For use cases such as these, more advanced internal-link detection may be required, situational to your implementation.
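
As a starting point, here’s a hedged TypeScript sketch of a more situational check; the media prefix and the internal host list are assumptions to adjust for your own implementation:

// Assumed values for illustration only - tune these per implementation
const INTERNAL_HOSTS = ['www.mysite.com'];
const MEDIA_PREFIX = '/-/media/';

function isRouteLink(href: string): boolean {
  if (href.startsWith('//')) return false; // scheme-insensitive link: treat as external
  if (href.startsWith(MEDIA_PREFIX)) return false; // static media file: needs a full request
  if (href.startsWith('/')) return true; // other site-relative paths are routes
  try {
    // absolute URLs pointing at one of our own hostnames can also be routed
    return INTERNAL_HOSTS.includes(new URL(href).hostname);
  } catch {
    return false; // not a parseable absolute URL
  }
}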

Code splitting with Sitecore JSS + React

Page weight - how much data a user needs to download to view your website - is a big deal in JavaScript applications. The more script that an application loads, the longer it takes to render for a user - especially in critical mobile scenarios. The longer it takes an app to render, the less happy the users of that app are. JavaScript is especially important to keep lightweight, because JS is not merely downloaded like an image - it also has to be parsed and compiled by the browser. Especially on slower mobile devices, this parsing can take longer than the download! So less script is a very good thing.

Imagine a large Sitecore JSS application with a large number of JavaScript components. With the default JSS application setup, the entire app JS must be delivered to the user when any page in the application loads. This is simple to reason about and performs well with smaller sites, but on a large site it is detrimental to performance if the home page must load 40 components that are not used on that route in order to render.

Enter Code Splitting

Code Splitting is a term for breaking up your app’s JS into several chunks, usually via webpack. There are many ways that code splitting can be set up, but we’ll focus on two popular automatic techniques: route-level code splitting, and component-level code splitting.

Route-level code splitting creates a JS bundle for each route in an application. Because of this, it relies on the app using static routing - in other words knowing all routes in advance, and having static components on those routes. This is probably the most widespread code splitting technique, but it is fundamentally incompatible with JSS because the app’s structure and layout is defined by Sitecore. We do not know all of the routes that an app has at build time, nor do we know which components are on those routes because that is also defined by Sitecore.

Component-level code splitting creates a JS bundle for each component in an application. This results in quite granular bundles, but overall excellent compatibility with JSS because it works great with dynamic routing - we only need to load the JS for the components that an author has added to a given route, and they’re individually cacheable by the browser providing great caching across routes too.

Component-level Code Splitting with React

The react-loadable library provides excellent component-level code splitting capabilities to React apps. Let’s add it to the JSS React app and split up our components!

Step 1: Add react-loadable

We need some extra npm packages to make this work.

// yarn
yarn add react-loadable
yarn add babel-plugin-syntax-dynamic-import babel-plugin-dynamic-import-node --dev

// npm
npm i react-loadable
npm i babel-plugin-syntax-dynamic-import babel-plugin-dynamic-import-node --save-dev

Step 2: Make the componentFactory use code splitting

In order to use code splitting, we have to tell create-react-app (which uses webpack) how to split our output JS. This is pretty easy using dynamic import, which works like a normal import or require but loads the module lazily at runtime. react-loadable provides a simple syntax to wrap any React component in a lazy-loading shell.
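
If you haven’t seen it before, a bare dynamic import looks like this (the path is illustrative); webpack sees the import() expression and splits the target module into its own chunk, fetched only when the expression runs:

// import() returns a Promise for the module, so the ContentBlock chunk
// is downloaded on first use instead of with the initial page load
import('../components/ContentBlock').then((mod) => {
  const ContentBlock = mod.default;
  // ...render ContentBlock now that its chunk is loaded
});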

In JSS applications, the Component Factory is a mapping of the names of components into the implementation of those components - for example to allow the JSS app to resolve the component named 'ContentBlock', provided by the Sitecore Layout Service, to a React component defined in ContentBlock.js. The Component Factory is a perfect place to put component-level code splitting.

In a JSS React app, the Component Factory is generated code by default - inferring the components to register based on filesystem conventions. The /scripts/generate-component-factory.js file defines how the code is generated. The generated code - created when a build starts - is emitted to /src/temp/componentFactory.js. Before we alter the code generator to generate split components, let’s compare registering a component in each way:

JSS React standard componentFactory.js
// static import
import ContentBlock from '../components/ContentBlock';

// create component map (identical code)
const components = new Map();
components.set('ContentBlock', ContentBlock);
react-loadable componentFactory.js
import React from 'react';
import Loadable from 'react-loadable';

// loadable dynamic import component - lazily loads the component implementation when it is first used
const ContentBlock = Loadable({
  // setting webpackChunkName lets us have a nice chunk filename like ContentBlock.hash.js instead of 1.hash.js
  loader: () => import(/* webpackChunkName: "ContentBlock" */ '../components/ContentBlock'),
  // this is a react component shown while lazy loading. See the react-loadable docs for guidance on making a good one.
  loading: () => <div>Loading...</div>,
  // this module name should match the webpackChunkName that was set. This is used to determine dependency during server-side rendering.
  modules: ['ContentBlock'],
});

// create component map (identical code)
const components = new Map();
components.set('ContentBlock', ContentBlock);

Updating the Component Factory Code Generation

In order to have our component factory use splitting, let’s update the code generator to emit react-loadable component definitions.

Modify /scripts/generate-component-factory.js:

// add this function
function LoadableComponent(importVarName, componentFolder) {
  return `const ${importVarName} = Loadable({
  loader: () => import(/* webpackChunkName: "${componentFolder}" */ '../components/${componentFolder}'),
  loading: () => <div>Loading...</div>,
  modules: ['${componentFolder}'],
});`;
}

// modify generateComponentFactory()...

// after const imports = [];
imports.push(`import React from 'react';`);
imports.push(`import Loadable from 'react-loadable';`);

// change imports.push(`import ${importVarName} from '../components/${componentFolder}';`) to
imports.push(LoadableComponent(importVarName, componentFolder));

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the loader.

Try it!

Start your app up with jss start. At this point Code Splitting should be working: you should see a JS file get loaded for each component on a route, and a short flash of Loading... when the route initially loads.

But there are still some issues to iron out. If the app is server-side rendered in headless or integrated modes, none of the content will be present, because the dynamic imports are asynchronous and have not resolved before the SSR completes. We’d also love to avoid that flash of loading text if the page was server-side rendered. Well guess what, we can do all of that!

Step 3: Configure code splitting for Server-Side Rendering

Server-side rendering with code splitting is a bit more complex. There are several pieces that the app needs to support:

  • Preload all lazy loaded components, so that they render immediately during server-side rendering instead of starting to load async and leaving a loading message in the SSR HTML.
  • Determine which lazy loaded components were used during rendering, so that we can preload the same components’ JS files on the client-side to avoid the flash of loading text.
  • Emit <script> tags to preload the used components’ JS files on the client side into the SSR HTML.

3.1: Configure SSR Webpack to understand dynamic import

The build of the server-side JS bundle is separate from the client bundle. We need to teach the server-side build how to compile the dynamic import expressions. Open /server/server.webpack.config.js.

// add these after other imports
const dynamicImport = require('babel-plugin-syntax-dynamic-import');
const dynamicImportNode = require('babel-plugin-dynamic-import-node');
const loadableBabel = require('react-loadable/babel');

// add the plugins to your babel-loader section
//...
use: {
  loader: 'babel-loader',
  options: {
    babelrc: false,
    presets: [env, stage0, reactApp],
    // [CS] ADDED FOR CODE SPLITTING
    plugins: [dynamicImport, dynamicImportNode, loadableBabel],
  },

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the webpack config.

3.2: Configure server.js

The /server/server.js is the entry point to the JSS React app when it’s rendered on the server-side. We need to teach this entry point how to successfully execute SSR with lazy loaded components, and to emit preload script tags for used components.

// add to the top
import Loadable from 'react-loadable';
import manifest from '../build/asset-manifest.json';

function convertLoadableModulesToScripts(usedModules) {
  return Object.keys(manifest)
    .filter((chunkName) => usedModules.indexOf(chunkName.replace('.js', '')) > -1)
    .map((k) => `<script src="${manifest[k]}"></script>`)
    .join('');
}

// add after const graphQLClient...
const loadableModules = [];

// add after initializei18n()...
.then(() => Loadable.preloadAll())

// wrap the `<AppRoot>` component with the loadable used-component-capture component
<Loadable.Capture report={(module) => loadableModules.push(module)}>
  <AppRoot path={path} Router={StaticRouter} graphQLClient={graphQLClient} />
</Loadable.Capture>

// append another .replace() to the rendered HTML transformations
.replace('<script>', `${convertLoadableModulesToScripts(loadableModules)}<script>`);

You can find a completed gist of these changes here with better explanatory comments. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the entry point.

3.3: Configure client-side index.js

The /src/index.js is the entry point to the JSS React app when it's rendered on the browser-side. We need to teach this entry point to wait until all of the components preloaded by SSR have finished loading before rendering the JSS app for the first time, avoiding a flash of loading text.

// add to the top
import Loadable from 'react-loadable';

// add after i18ninit()
.then(() => Loadable.preloadReady())

You can find a completed gist of these changes here. Search in it for [CS] to see each change in context. Don’t copy the whole file, in case of future changes to the rest of the entry point.

Step 4: Try it out

With the code changes to enable splitting complete, deploy your app to Sitecore and try it in integrated mode. You should see the SSR HTML include a script tag for every component used on the route, and the rendering will wait until the components have preloaded before showing the application. This preloading means the browser does not have to wait for React to boot up before beginning to load the components, resulting in a much faster page load time.

The ideal component loading technique for each app will be different depending on the number and size of its components. Using the standard JSS styleguide sample app, enabling component code-splitting like this resulted in transferring almost 40 KB less data when loading the home page (which has a single component) versus the styleguide page (which has many components). This difference increases with the total number of components in a JSS app - but for most apps, code splitting is a smart idea if the app has many components that are used on only a few pages.

Deploying Disconnected JSS Apps https://kamsar.net/index.php/2018/07/Deploying-Disconnected-JSS-Apps/ 2018-07-27T15:16:07.000Z 2021-07-26T23:19:02.514Z It’s possible to deploy server-side rendered Sitecore JSS sites in disconnected mode. When deployed this way, the JSS app will run using disconnected layout and content data, and will not use a Sitecore backend.

Why would I want this?

In a word, previewing. Imagine the early development and prototyping phase of a JSS implementation: there's a team of designers, UX architects, and frontend developers who are designing the app and its interactions. In most cases, Sitecore developers are not involved yet - or if they are, there is no Sitecore instance set up.

This is one of the major advantages of JSS - using disconnected mode, a team like this can develop a non-throwaway frontend for the final JSS app. But stakeholders will want to review the in-progress JSS app somewhere other than http://localhost:3001, so how do we put a JSS site somewhere shared without having a Sitecore backend?

Wondering about real-world usage?
The JSS docs use this technique.

How does it work?

Running a disconnected JSS app is a lot like headless mode: a reverse proxy is set up that proxies incoming requests to the Layout Service, then transforms the result of the LS call into HTML using JS server-side rendering and returns it. In the case of disconnected deployment, instead of the proxy sending requests to the Sitecore-hosted Layout Service, the requests are proxied to the disconnected layout service.

Setting up a disconnected app step by step

To deploy a disconnected app you’ll need a Node-compatible host. This is easiest with something like Heroku or another PaaS Node host, but it can also be done on any machine that can run Node. For our example, we’ll use Heroku.

Configuring the app for disconnected deployment

Any of the JSS sample templates will work for this technique. Create yourself a JSS app with the CLI in 5 minutes if you need one to try.

  1. Ensure the app has no scjssconfig.json in the root. This will make the build use the local layout service.
  2. Create a build of the JSS app with jss build. This will build the artifacts that the app needs to run.
  3. Install npm packages necessary to host a disconnected server: yarn add @sitecore-jss/sitecore-jss-proxy express (substitute npm i --save if you use npm instead of yarn)
  4. Deploy the following code to /scripts/disconnected-ssr.js (or similar path). Note: this code is set up for React, and will require minor tweaks for Angular or Vue samples (build -> dist)
    const express = require('express');
    const { appName, language, sitecoreDistPath } = require('../package.json').config;
    const scProxy = require('@sitecore-jss/sitecore-jss-proxy').default;
    const { createDefaultDisconnectedServer } = require('@sitecore-jss/sitecore-jss-dev-tools');
    const app = require('../build/server.bundle');

    const server = express();

    // the port the disconnected app will run on
    // Node hosts usually pass a port to run on using a CLI argument
    const port = process.argv[2] || 8080;

    // create a JSS disconnected-mode server
    createDefaultDisconnectedServer({
      port,
      appRoot: __dirname,
      appName,
      language,
      server,
      afterMiddlewareRegistered: (expressInstance) => {
        // to make disconnected SSR work, we need to add additional middleware (beyond mock layout service) to handle
        // local static build artifacts, and to handle SSR by loopback proxying to the disconnected
        // layout service on the same express server

        // Serve static app assets from local /build folder into the sitecoreDistPath setting
        // Note: for Angular and Vue samples, change /build to /dist to match where they emit build artifacts
        expressInstance.use(
          sitecoreDistPath,
          express.static('build', {
            fallthrough: false, // force 404 for unknown assets under /dist
          })
        );

        const ssrProxyConfig = {
          // api host = self, because this server hosts the disconnected layout service
          apiHost: `http://localhost:${port}`,
          layoutServiceRoute: '/sitecore/api/layout/render/jss',
          apiKey: 'NA',
          pathRewriteExcludeRoutes: ['/dist', '/build', '/assets', '/sitecore/api', '/api'],
          debug: false,
          maxResponseSizeBytes: 10 * 1024 * 1024,
          proxyOptions: {
            headers: {
              'Cache-Control': 'no-cache',
            },
          },
        };

        // For any other requests, we render app routes server-side and return them
        expressInstance.use('*', scProxy(app.renderView, ssrProxyConfig, app.parseRouteUrl));
      },
    });
  5. Test it out. From a console in the app root, run node ./scripts/disconnected-ssr.js. Then in a browser, open http://localhost:8080 to see it in action!

Deploying the disconnected app to Heroku

Heroku is a very easy-to-use PaaS Node host, but you can also deploy to Azure App Service or any other service that can host Node. To get started, sign up for a Heroku account and install and configure the Heroku CLI.

  1. We need to tell Heroku to build our app when it’s deployed.
    • Locate the scripts section in the package.json
    • Add the following script:
      "postinstall": "npm run build"
  2. We need to tell Heroku the command to use to start our app.
    • Create a file in the app root called Procfile
    • Place the following contents:
      web: node ./scripts/disconnected-ssr.js $PORT
  3. To deploy to Heroku, we’ll use Git. Heroku provides us a Git remote that we can push to that will deploy our app. To use Git, we need to make our app a Git repository:
    git init
    git add -A
    git commit -m "Initial commit"
  4. Create the Heroku app. This will create the app in Heroku and configure the Git remote to deploy to it. Using a console in your app root:
    heroku create <your-heroku-app-name>
  5. Configure Heroku to install node devDependencies (which we need to start the app in disconnected mode). Run the following command:
    heroku config:set NPM_CONFIG_PRODUCTION=false YARN_PRODUCTION=false
  6. Deploy the JSS app to Heroku:
    git push -u heroku master
  7. Your JSS app should be running at https://<yourappname>.herokuapp.com!

In case it’s not obvious, do not use this setup in production. The JSS disconnected server is not designed to handle heavy production load.

Announcing Sitecore JSS XSLT Support https://kamsar.net/index.php/2018/04/Announcing-Sitecore-JSS-XSLT-Support/ 2018-04-01T07:07:06.000Z 2021-07-26T23:19:02.513Z Sitecore Team X is proud to announce the final public release of Sitecore JavaScript Services (JSS) with full XSLT 3.0 support!

Why XSLT 3.0?

XSLT 3.0 allows for JSON transformations, so you can use the full power of modern JSS while retaining the XSLT developer experience that Site Core developers know and love.

Our XSLT 3.0 engine allows for client-side rendering by transforming hard-to-read JSON into plain, sensible XML using XSLT 3.0 standards-compliant JSON-to-XML transformations. Instead of ugly JSON, your JSS renderings can use simple, easy-to-read XML like this:

<j:map xmlns:j="http://www.w3.org/2013/XSL/json">
  <j:map key="fields">
    <j:map key="title">
      <j:map key="value">
        <j:string key="editable">SiteCore Experience Platform + JSS + XSLT</j:string>
        <j:string key="value">SiteCore Experience Platform + JSS + XSLT</j:string>
      </j:map>
    </j:map>
  </j:map>
</j:map>

It’s just as simple to make a JSS XSLT to transform your rendering output. Check out this super simple “hello world” sample:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:math="http://www.w3.org/2005/xpath-functions/math"
  xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl"
  xmlns:SiteCore="http://www.SiteCore.com/demandMore/xslt"
  xmlns:h="http://www.w3.org/1999/xhtml"
  xmlns:fn="http://www.w3.org/2005/xpath-functions"
  xmlns:j="http://www.w3.org/2005/xpath-functions"
  exclude-result-prefixes="xs math xd h SiteCore"
  version="3.0"
  expand-text="yes"
>
  <xsl:output method="text" indent="yes" media-type="text/json" omit-xml-declaration="yes"/>
  <xsl:variable name="fields-a" select="json-to-xml(/)"/>
  <xsl:template match="/">
    <xsl:variable name="fields-b">
      <xsl:apply-templates select="$fields-a/*"/>
    </xsl:variable>
    {xml-to-json($fields-b, map{'indent': true()})}
  </xsl:template>
  <xsl:template match="/j:map">
    <j:map>
      <j:array key="fields">
        <xsl:apply-templates select="j:map[@key='fields']/j:map" mode="rendering"/>
      </j:array>
    </j:map>
  </xsl:template>
  <xsl:template match="j:map" mode="rendering">
    <j:map>
      <j:string key="title">{j:string[@key='title:value']||' '||j:string[@key='title:editable']}</j:string>
      <xsl:if test="j:boolean[@key='experienceEditor']">
        <j:string key="editable">{j:string[@key='editable']/text()}</j:string>
      </xsl:if>
    </j:map>
  </xsl:template>
</xsl:stylesheet>

JSON transformations allow XSLT 3.0 to be a transformative force on the modern web. Expect to see recruiters demand 10 years of XSLT 3.0 experience for Site-core candidates within the next year - this is a technology you will not want to miss out on learning.

Dynamic XSLT with VBScript

Modern JavaScript is way too difficult, so we’ve implemented a feature that lets you define dynamic XSLT templates using ultra-modern VBScript:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:math="http://www.w3.org/2005/xpath-functions/math"
  xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl"
  xmlns:SiteCore="http://www.SiteCore.com/demandMore/xslt"
  xmlns:h="http://www.w3.org/1999/xhtml"
  xmlns:fn="http://www.w3.org/2005/xpath-functions"
  xmlns:j="http://www.w3.org/2005/xpath-functions"
  exclude-result-prefixes="xs math xd h SiteCore"
  version="3.0"
  expand-text="yes"
>
  <xsl:output method="text" indent="yes" media-type="text/json" omit-xml-declaration="yes"/>
  <xsl:variable name="fields-a" select="json-to-xml(/)"/>
  <xsl:template match="/">
    <xsl:variable name="fields-b">
      <xsl:apply-templates select="$fields-a/*"/>
    </xsl:variable>
    {xml-to-json($fields-b, map{'indent': true()})}
  </xsl:template>
  <xsl:template match="/j:map" type="text/vbscript">
    Dim txtValue = xmlns.SiteCore.rendering.title.value
    Dim txtEditable = xmlns.SiteCore.rendering.title.value

    Sub xsltRender(txtVBProgrammers, txtAreDim)
      document.write(txtVBProgrammers)
    End Sub
  </xsl:template>
  <xsl:template match="j:map" mode="rendering">
    <j:map>
      <j:string key="title">{j:string[@key='title:value']||' '||j:string[@key='title:editable']}</j:string>
      <xsl:if test="j:boolean[@key='experienceEditor']">
        <j:string key="editable">{j:string[@key='editable']/text()}</j:string>
      </xsl:if>
    </j:map>
  </xsl:template>
</xsl:stylesheet>

With a quick piece of simple VBScript like that, you’ll be making awesome JSS pages like this one in no time!

How can I get this XSLT goodness?

Glad you asked. Download it right here!

What’s next for JSS

The Sitecore JSS team is always looking for opportunities to improve JSS and make it compatible with the most modern technologies. Experimentation is already under way to add ColdFusion scripting support for XSLT 3 JSS renderings, and enable PHP server-side rendering for your SiteCore solutions.

Just another way that we help you succeed in your Site core implementations.

Send us feedback on our roadmap

The lazy developer's way to install Sitecore 9 https://kamsar.net/index.php/2017/11/The-lazy-way-to-install-Sitecore-9/ 2017-11-02T00:37:11.000Z 2021-07-26T23:19:02.512Z Since Sitecore 9 was released, there’s been a lot of talk about the new installation techniques that it necessitates - namely, the move towards infrastructure as code and the Sitecore Install Framework (SIF). It’s no secret that installing Sitecore 9 can be a bit more difficult than previous versions, but it really doesn’t have to be.

This is the part where you might be expecting me to announce some crazy script I wrote, but not this time because someone else already did the work. So let’s address the elephant in the room.

Solr the easy way

Back in the day I wrote some scripts to install Solr using Bitnami. It worked, but I'd always wanted to find the time to make it simpler and less dependent on Bitnami and their notoriously hard-to-find older versions. Well, Jeremy Davis did exactly what I wanted to do and scripted the whole Solr install, locally trusted SSL certificate, and installation as a service. You can also just skip straight to the gist of the PowerShell you need to run.

Seriously, it’s awesome and you should use it especially for local dev setups.

A few things I noted when I used it:

  • Change the download URLs for Solr and NSSM to be https (encrypted). They work fine that way.
  • You must install the Java Runtime Environment (JRE) first and plug in the right version - it won’t do it for you
  • Make sure to add the $SolrHost value to your hosts file before you run the script so that it can resolve with the SSL certificate correctly (it will be bound to that name; don’t use localhost).
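
    For example, if $SolrHost were set to solr.local.dev (a made-up name), the hosts file entry would look like this:

    127.0.0.1    solr.local.dev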

SIF the easy way with SIFless

SIF is a pretty amazing tool, but it has two shortcomings: one, it's great for automated infrastructure but not so great for a quick local setup; and two, it doesn't yet have an uninstall feature. Well, Rob Ahnemann wrote a handy GUI for SIF called SIFless that fixes both of those issues, making quick setups with mostly default settings easy and generating hackable SIF PowerShell scripts that let you do whatever advanced things you want after using the GUI to get started. And it generates uninstall scripts too, which get rid of the Windows services, Solr cores, and other artifacts that are left behind when you want to tear down a test site.

A few things to be aware of with SIFless:

  • Despite the amusing name, SIFless does require SIF to be installed!
  • The Solr URL needs to be the path to the Solr admin panel (e.g. not https://mysolr:8983, but https://mysolr:8983/solr)
  • The Solr physical path needs to be to the root of the Solr instance (if it’s the right place you’ll see ‘bin’ and ‘server’ folders; if you used the script above with defaults this would be C:\solr\solr-6.6.2)

Go forth and use Sitecore 9

Using these two tools I went from having no Solr and no Sitecore installations to having a fully operational battle station Sitecore 9 instance with xConnect in about 45 minutes. And that includes debugging my own silly mistakes. I bet you can do it faster. Get thee to a PowerShell console!

time to get SIFty

I'm a Sitecorian! https://kamsar.net/index.php/2017/10/I-m-a-Sitecorian/ 2017-10-23T20:01:58.000Z 2021-07-26T23:19:02.512Z I am excited to announce that I am joining the Sitecore product team as a Platform Architect!

wth

Now normally this wouldn’t merit a whole blog post, and we’d just let the recruiters find out about it on LinkedIn. But I’m sure many folks’ next question would be around all the libraries that I maintain and what will happen to them. So let’s address the elephant in the room:

not a thing

Unicorn, Rainbow, and Dianoga

These will continue exactly as they are today as independent, community driven projects. I will still be the maintainer. The license will remain MIT.

This also includes the dependency libraries that these projects use (e.g. Configy, WebConsole, MicroCHAP).

Synthesis & Leprechaun

Ok hold up: let’s first define what Leprechaun is because I haven’t publicly spoken about it yet. It’s a stable command-line code generator that works from Rainbow serialized items. Kinda like the T4 templates that a lot of people use except that it’s better because:

  • Uses Roslyn and C# Scripting, so it can run outside Visual Studio (e.g. on a CI server)
  • Ridiculously faster than T4
  • Has a watch mode that provides instant regeneration when saving templates in Sitecore
  • Uses the same configuration system as Unicorn does, so it’s familiar and simple to configure

Leprechaun is currently working in production on a couple sites, but does not have complete documentation so it may require a bit more spelunking to use. Currently it supports Synthesis out of the box, but it’s easy to add or change code generation templates.

Ok, back to what's happening to these projects. For the last year or so it's been difficult to come up with the time and inclination to give Synthesis and Leprechaun the love they deserve. In order to get them that love, I am ceding maintainership to the excellent Ben Lipson. Ben is a talented developer and Sitecore MVP with a lot of good ideas about where to take these tools. He'll do a great job.

Aside from transferring the repositories to Ben, nothing else is changing.

Will you be disappearing from the community?

nope

No. #venting 4lyfe.

What will you be working on at Sitecore?

I’ll be on Team X, led by the illustrious Alex Shyba. In other words, if I told you I’d have to kill you.

actually it's JSS

/giphy #magic8ball "Will this be awesome?"

yes

Quickly add SSL to Solr https://kamsar.net/index.php/2017/10/Quickly-add-SSL-to-Solr/ 2017-10-23T15:20:39.000Z 2021-07-26T23:19:02.512Z There have been several people recently who I’ve seen having trouble setting up SSL for their Solr in order to use it with Sitecore 9. So, I present the following gist to you. It’s designed to automate the complete setup process of adding SSL to Solr with a self-signed certificate, and trusting that self-signed certificate. For production setups with a real certificate, it should be quite easy to modify.

It’s been tested on standalone as well as Bitnami Solr. The script requires Windows 10 to use the Import-PfxCertificate cmdlet; if you don’t have that you can remove the trust scripting and do it manually.

giphy

All about xConnect Security https://kamsar.net/index.php/2017/10/All-about-xConnect-Security/ 2017-10-22T20:57:18.000Z 2021-07-26T23:19:02.507Z Sitecore 9 introduces the new xConnect server to the ecosystem. xConnect is an abstracted service layer that Sitecore uses for all its analytics and marketing automation features. If you’re using Sitecore XP (aka xDB), you’ll need an xConnect server if you upgrade to Sitecore 9.

xConnect is noteworthy because it introduces client certificate authentication for the Sitecore XP server to communicate with xConnect. Certificates are a complex subject, and can fail in any number of less than helpful ways. This post aims to help you understand how certificates work in Sitecore 9, and provide you some tools to diagnose what’s wrong when they are not working right.

What is TLS?

In order to understand how xConnect works, it’s important to understand what’s going on: Transport Layer Security (TLS). You may also think of this as “SSL” or “HTTPS.”

TLS is a protocol for establishing secure encrypted connections between a server and a client. The key aspect of TLS is that the client and server can securely exchange encryption keys in such a way that they cannot be observed by malicious parties that may be watching the exchange.

Asymmetric vs Symmetric Encryption

To understand how TLS works, it’s important to understand the distinction between Asymmetric (also called Public Key) Encryption, and Symmetric Encryption.

If you ever made secret codes as a kid, you've probably used symmetric encryption. This is where the sender and receiver both need to know a key to decrypt the message, for example a simple shift cipher where D = A, E = B, and so forth. Julius Caesar famously sent secret messages by shifting letters three places forward like this. Symmetric encryption does have one major downfall, however: possession of the secret key lets you read any encrypted message, even if you are not the intended recipient.

Asymmetric encryption, on the other hand, uses two different keys: a public key and a private key. The public key can be shared with anyone without compromising anything. However, a client can use the public key to encrypt a message in such a way that it can only be decrypted with the server's private key. In this way, a server can receive private, encrypted messages from clients it shares no secrets with.

TLS uses asymmetric encryption to transfer an encryption key for symmetric encryption, which is used for ongoing data transfer over the encrypted connection. This is done because asymmetric encryption is much, much slower than symmetric.
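
To make that handoff concrete, here's a minimal sketch of the same idea using Node's built-in crypto module - this is not real TLS, just the asymmetric-key-exchange-then-symmetric-encryption pattern in miniature:

const crypto = require('crypto');

// the server owns an asymmetric key pair; the public key can be shared freely
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// the client invents a random symmetric key and encrypts it with the server's public key
const symmetricKey = crypto.randomBytes(32);
const encryptedKey = crypto.publicEncrypt(publicKey, symmetricKey);

// only the server's private key can recover the symmetric key...
const recoveredKey = crypto.privateDecrypt(privateKey, encryptedKey);

// ...and from here on, fast symmetric encryption (AES) protects the actual traffic
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-256-cbc', recoveredKey, iv);
const ciphertext = Buffer.concat([cipher.update('hello xConnect', 'utf8'), cipher.final()]);
console.log(ciphertext.toString('base64'));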

It’s important to understand the difference between public and private keys when you set up Sitecore 9, because they need to be deployed to different servers in your infrastructure. A certificate generally includes both a public and private key, however it can also include only a public key.

xConnect Setup

xConnect uses mutual authentication to secure the connections between it and the Sitecore XP server. This is accomplished using TLS client certificates.

If you’ve worked with SSL certificates before, this is a stronger form of SSL where not only does the client have to trust the server, but the server also has to trust a second certificate issued to the client. In this case, the client is the Sitecore XP server, and the server is the xConnect server. Let’s take a look at how this works:

SSL Server Certificate Negotiation

All SSL connections go through this process, whether xConnect or otherwise. In a standard Sitecore 9 XP installation, the xConnect server will have the server certificate installed. The Sitecore XP server will only have a server certificate if access to Sitecore itself, e.g. for administration, is done via SSL (in which case it will likely be a separate server certificate from xConnect’s).

  1. Client prepares to make a HTTPS request (e.g. you ask for https://xconnect)
  2. Client sends a ClientHello message to the server. This proposes encryption standards, among other things.
  3. The server replies with a ServerHello message back to the client. This includes the server’s public key, and the encryption standards that the server has selected from what the client proposed in the ClientHello.
  4. The client validates the server certificate (e.g. must have correct domain and trusted issuer)
  5. A symmetric encryption key is generated and exchanged using the server’s public key
  6. Now that an encrypted connection is established, a normal HTTP request is sent over the encrypted channel
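
If you'd like to observe this negotiation from code, a quick probe with Node's tls module (the host name here is hypothetical) will show whether validation passed and which cipher was selected:

const tls = require('tls');

const socket = tls.connect({ host: 'xconnect.local', port: 443, servername: 'xconnect.local' }, () => {
  console.log('authorized:', socket.authorized); // did server certificate validation pass?
  console.log('cipher:', socket.getCipher()); // the encryption standards that were negotiated
  const cert = socket.getPeerCertificate();
  console.log('subject:', cert.subject, 'issuer:', cert.issuer);
  socket.end();
});
socket.on('error', (err) => console.error('TLS negotiation failed:', err.message));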

What can go wrong with server certificate negotiation

The most common issues are domain mismatches and untrusted certificates. Generally you can diagnose issues with server certificates using a web browser - request the site over HTTPS and review the error shown in the browser. Make sure you request the xConnect server URL, not the Sitecore XP URL if you are diagnosing an xConnect connectivity issue.

Domain Mismatches

A domain mismatch occurs when a certificate's domain does not match the domain being requested. For example, a certificate issued to sitecore.net will fail this validation if the site you're requesting is https://foo.local. Certificates may also be issued using wildcards (e.g. *.sitecore.net). Note that wildcards apply to one level of subdomains only - so *.sitecore.net would cover foo.sitecore.net, but not bar.foo.sitecore.net (nor the bare sitecore.net, for that matter).

Domain matching is done based on the host header the server receives. For example if the xConnect server is https://xconnect but can also be accessed via https://127.0.0.1, the certificate will be invalid if the IP address is used because the certificate was not issued for 127.0.0.1.
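
Since the one-level wildcard rule trips people up, here's a small illustrative function (not Windows' actual validation logic) that captures the matching behavior described above:

function matchesCertificate(host, certSubject) {
  if (!certSubject.startsWith('*.')) return host === certSubject;
  const suffix = certSubject.slice(2); // e.g. 'sitecore.net'
  const [firstLabel, ...rest] = host.split('.');
  // the wildcard covers exactly one subdomain label
  return firstLabel.length > 0 && rest.join('.') === suffix;
}

console.log(matchesCertificate('foo.sitecore.net', '*.sitecore.net')); // true
console.log(matchesCertificate('bar.foo.sitecore.net', '*.sitecore.net')); // false
console.log(matchesCertificate('sitecore.net', '*.sitecore.net')); // false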

If you have a domain mismatch issue, you will need to either get a new certificate (and update the xConnect IIS site(s) to use the new certificate) or change the domain for xConnect to one that is valid for the certificate.

Untrusted Certificates

To understand trust issues, it’s important to understand how certificates are issued. Certificates are issued by other certificates.

In fact, certificates can be issued in chains (Xzibit would definitely approve). Trust issues occur when the certificate that issued the server certificate is not considered to be trusted by the client. On Windows, trust is established by being included in the Trusted Root Certification Authorities in the machine certificates:

Note that to trust a certificate, only the public key for the server certificate must be imported here. If you’re using self-signed certificates that issued themselves - like localhost in the screenshot - you can add the certificate itself to the trusted root certificates by exporting it and reimporting it into the root certificates. If using a commercially issued certificate, that certification authority’s root certificates must be added to the trusted root - in most cases, they are already present.

More esoteric errors

There are some less common issues that can also cause server certificate negotiation errors. Servers are commonly hardened against vulnerable ciphers, hash algorithms, or SSL protocol versions. You might have heard of the Heartbleed or POODLE vulnerabilities, or had to support TLS 1.2 when working with web APIs such as SalesForce. This hardening is a good idea, but if the server and client cannot mutually agree on a supported cipher, hash, and protocol version, the connection will fail. If the certificate is trusted and has the correct domain, this would be the next thing to check.

If you’ve never heard of this before, you can secure your IIS servers using a tool like IISCrypto. Go do it now, this post will wait.

Note that the .NET HTTP client with framework versions prior to 4.6.2 defaults to only supporting TLS up to 1.1. Many modern security scripts will disable all TLS protocol versions except for 1.2, which will cause HTTP requests from clients with earlier versions of the .NET framework installed to fail.

SSL Client Certificate Negotiation

Hopefully now you have a decent idea of how server certificates work. But xConnect also uses client certificates. A client certificate enables mutual authentication. With only a server certificate, the client must decide to trust the server but the server has no way to know if it should trust the client. Enter client certificates.

A client certificate is essentially the opposite of the server certificate. When using a client certificate, the negotiation works similarly to the server certificate, except that when the server sends the ServerHello (#3 above) it requests a client certificate in addition to sending its public key. The client then sends the public key of its client certificate back to the server - and then the server decides whether it should trust the client certificate.

If the client certificate is not trusted, it is rejected. The rules for validating a client certificate are up to the server and do not necessarily follow the same validation rules as a server certificate on the client. In the case of xConnect:

  • The domain/subject on the client certificate does not seem to matter to xConnect
  • The trusting of the certificate is done using the thumbprint of the certificate (a hash of the certificate which uniquely identifies it). Note that the thumbprint will change when an expired certificate is renewed, so you will need to reconfigure xConnect after renewing a client certificate so that it trusts the newer thumbprint.
  • The xConnect server must trust the issuer of the client certificate
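
For illustration only, this is what presenting a client certificate during the handshake looks like with Node's tls module - the host, file name, and passphrase are placeholders:

const fs = require('fs');
const tls = require('tls');

const socket = tls.connect({
  host: 'xconnect.local', // hypothetical xConnect host
  port: 443,
  pfx: fs.readFileSync('./client-cert.pfx'), // client certificate, including its private key
  passphrase: 'secret',
}, () => {
  // if we get here, the server accepted our client certificate (mutual TLS)
  console.log('mutual TLS established');
  socket.end();
});
socket.on('error', (err) => console.error('handshake rejected:', err.message));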

What can go wrong with client certificate negotiation

There are a lot of things that can go wrong with the client certificate - more so than the server certificate. When troubleshooting, make your first step the Sitecore XP logs - they generally have some basic information about a bad client cert.

If you’re receiving HTTP 4xx responses

Chances are your client certificate validation failed. This could mean:

  • The client certificate is not installed on both the Sitecore XP server and the xConnect server (the xConnect server would only need the public key)
  • The client certificate is not considered trusted on the xConnect server
  • The certificate thumbprint configured in the xConnect server's App_Config\ConnectionStrings.config is missing or incorrect. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there (a cleanup sketch appears below).
  • The certificate location configured in the xConnect server’s App_Config\ConnectionStrings.config is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE.

“The certificate was not found”

This indicates one of two things:

  • The thumbprint is incorrect in the Sitecore XP server’s App_Config\ConnectionStrings.config file. Note that the thumbprint must be all uppercase with no spaces or colons. If copied from certificate manager, an unprintable character might prefix the thumbprint - check for a hidden character there.
  • The certificate location configured in the Sitecore XP server’s App_Config\ConnectionStrings.config is incorrect. Normally the certificate should be stored in local machine certificates and have a connection string similar to StoreName=My;StoreLocation=LocalMachine;FindType=FindByThumbprint;FindValue=THUMBPRINTVALUE.
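
Since both of the thumbprint problems above come down to stray characters, a tiny normalization helper (an illustrative sketch in Node, using a made-up thumbprint) can save some squinting:

// strips spaces, colons, and any invisible characters copied from
// certificate manager, then uppercases what's left
function normalizeThumbprint(raw) {
  return raw.replace(/[^0-9a-fA-F]/g, '').toUpperCase();
}

// '\u200E' is the kind of unprintable character certificate manager can prepend
console.log(normalizeThumbprint('\u200Ed5 e7 8a 3c 91 0b 42 ff 6d 21 77 ab cd ef 01 23 45 67 89 aa'));
// -> D5E78A3C910B42FF6D2177ABCDEF0123456789AA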

System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

As long as the server certificate is valid, this message is most likely that the Sitecore XP server’s IIS app pool user account does not have read access to the client certificate’s private key. This access is needed so that the Sitecore XP server can encrypt communications using its client certificate.

To remedy this issue, open the local machine certificates (“Manage computer certificates” in a start menu search) on the Sitecore XP server. Find the client certificate (normally under Personal\Certificates). Right click it, choose All Tasks, then Manage Private Keys.... You should get a security assignment window like this:

Next, add your IIS app pool user to the ACLs and grant it Read permissions (as above). Remember, if you're using AppPoolIdentity (you should be, unless you're using a domain account for Windows auth to SQL), you must select the account by choosing Local Computer as the search location, and enter IIS APPPOOL\MyAppPoolsName as the account name to find.

Still having issues? Well, you can also use the security audit log to find out which account is failing to get access, then add that account in the key ACLs above:

Local Development Tip

If you work at a Sitecore partner and will have multiple copies of Sitecore 9 running locally, this can cause issues if you issue a dedicated SSL server certificate to each site. This is because a given TCP port (e.g. 443, the default) can only have one SSL certificate bound to it (unless you use SNI). This precludes having multiple Sitecore 9 instances running together locally unless they share the same SSL certificate.

Wildcard certificates are perfect for this job. As long as you use the same top level suffix for all your dev sites (e.g. *.local.dev), you can use the same wildcard certificate for your server certificate for all dev sites. Note that IIS’ self-signed certificate generator will not create a wildcard certificate for you. You’ll have to use something else, like New-SelfSignedCertificate, to create one.

Important note: You must issue a wildcard for at least two segments of domain for it to be trusted. For example *.dev is bad, but *.local.dev is good.
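
As a starting point, something like this one-liner (the domain is an example) will create a wildcard certificate in the machine store with New-SelfSignedCertificate:

New-SelfSignedCertificate -DnsName "*.local.dev" -CertStoreLocation "cert:\LocalMachine\My"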

Note that client certificates should be unique on each site, only the server certificate should be shared.

In the release version of Sitecore 9, you can also disable the requirement to use encryption with xConnect which can bypass a lot of debugging. Do not do this in production or else a herd of elephants will destroy you.

Advanced Debugging with Wireshark

It’s possible to watch the SSL negotiation at a TCP/IP level using a network monitor such as Wireshark. This can provide insights on why your setup is failing when error messages are less than optimal. For example I spent a couple days diagnosing what turned out to be private key security issues. I figured this out by using Wireshark and observing that the client was never sending its client certificate after the server requested it, and figuring out why that was.

To use Wireshark to watch SSL traffic, you’ll have to set it up to decrypt traffic. This guide walks you through setting up decryption on Windows with an exported private key.

If you’re tracing local dev traffic (e.g. from localhost to localhost, including using your machine’s DNS name) Wireshark will not capture that unless you install npcap instead of the default pcap packet capture software. Once npcap is installed, tell Wireshark to bind to the Npcap Loopback Adapter to see local traffic.

Here is a screenshot of the Wireshark capture where I diagnosed the client certificate security issue:

Good luck!

Where to find Sitecore documentation https://kamsar.net/index.php/2017/10/Where-to-find-Sitecore-documentation/ 2017-10-22T19:29:29.000Z 2021-07-26T23:19:02.511Z

The land of Sitecore documentation is becoming a bit crowded these days. While at Symposium, I heard some people say they didn’t know how to keep up on new documentation - so here’s what I know. No doubt I missed some resources too, but these are the ones I usually use and follow.

Official Docs

Sitecore Doc Site

This is the main place to find documentation for Sitecore, as well as Sitecore modules. It has a handy RSS feed of updated articles you should subscribe to.

Unfortunately the RSS feed is not entirely complete due to documentation microsites being proxied in under the main doc site (for example Commerce and the v9 Scaling Guide). These statically generated sites generally do not provide their own RSS feeds, and are thus harder to track updates to.

Sitecore Knowledge Base

The Sitecore KB lists known issues, support resolutions, security bulletins, and other support information. Like the main doc site, it has its own RSS feed of updated articles that is absolutely worth subscribing to.

Sitecore Helix Docs

Sitecore’s official architecture guidance has its own website. Unfortunately, no RSS feed of updates.

Sitecore JSS Docs

The JavaScript Services module has its own separate documentation site. Unfortunately, no RSS feed of updates.

Sitecore Dev Site

Where to go to actually download Sitecore releases and official modules such as SXA and PXM. There’s no RSS feed of new releases and updates, unfortunately.

Community-run Docs

Sitecore Blog Feed

A Sitecore-run blog aggregator that serves up a fresh helping of most major Sitecore blogs. Worth subscribing to via its RSS feed.

Sitecore StackExchange

A community-driven Q&A site that’s part of StackExchange. If you have a question about Sitecore, there are many highly active members who are happy to help here.

Sitecore Slack

Slack is a group messaging/discussion tool. The Sitecore Community Slack group has over 2,700 Sitecore developers with very active participation. If you do Sitecore, you should be here.

Unofficial Sitecore Training

Community run unofficial training videos that cover development practices that are commonly used, but not covered in official Sitecore training. More opinionated, influenced heavily by real-world implementation experiences.

Sitecore Community Docs

Unofficial documentation. It's not updated that often any longer, but there's still some good information here - especially the article on config patching.

Sitecore Powershell Extensions Docs

The SPE documentation is so complete that it’s worth mentioning even though it’s for a single Sitecore module.

Unicorn 4 Released https://kamsar.net/index.php/2017/06/Unicorn-4-Released/ 2017-06-22T19:57:43.000Z 2021-07-26T23:19:02.506Z unicorn

I’m happy to announce the final release of Unicorn 4.0! Unicorn 4 comes with significant performance and developer experience improvements, along with bug fixes. Unicorn 4 is available from NuGet or GitHub.

Project Dilithium and Performance

Unicorn 4 is faster - a lot faster. Check out these benchmarks:

performance benchmarks

The speed increase is due to optimized caching routines, as well as the Dilithium batch processors. Dilithium is an optional feature that is off by default: because of its newness, it’s still experimental. I’m using it in production though. Give it a try - it can always be turned off without hurting anything.

For more detail into how Unicorn 4 is faster, and what Dilithium does, check out this detailed blog post.

Improved Modular Configuration

Unicorn 4 features a refactored configuration system that is designed to support Sitecore Helix projects with an improved configuration experience. The new config system is completely backwards-compatible, but now enables configuration inheritance, configuration variables, and configuration extension so that modular projects can encode their conventions (e.g. paths to include, physical paths) into one base config and all the module configs can extend it.

This drastically reduces the verbosity of the module configurations, and improves their maintainability by allowing conventions to be DRY. Here’s a very simple example of a base conventions configuration:

<configuration name="Habitat.Feature.Base" abstract="true">
  <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization" />
  <predicate>
    <include name="$(layer).$(module).Templates" database="master" path="/sitecore/templates/$(layer)/$(module)" />
  </predicate>
</configuration>

And here’s a module configuration that extends it:

<configuration
  name="Feature.News"
  extends="Habitat.Feature.Base">
  <!-- automatically stores items at $(sourceFolder)\Feature\News\serialization -->
  <!-- automatically includes /sitecore/templates/Feature/News -->
</configuration>

There’s a lot more that you can do with the configuration enhancements in Unicorn 4 too. For additional details, read this extensive blog post.

Sitecore PowerShell Extensions Support

unicorn + spe

Just about anything you can do with Unicorn can now be automated using Sitecore PowerShell Extensions in Unicorn 4. You can now run Unicorn SPE cmdlets to…

Console Output Scaling

The Unicorn console has received a serious upgrade in Unicorn 4. If you’ve ever run a sync that changed a large number of items from the Unicorn Control Panel, you may have noticed the browser slow to a crawl and the sync seem to almost stop. The console that underpins Unicorn 3 and earlier started to choke at around 500 lines.

No longer! Unicorn 4’s upgraded console has spit out 100,000 lines without a hitch, and it should scale beyond that.

The automated tool console (PowerShell API) has also received an upgrade. Previously the tool console buffered all the output of a sync before sending it back. This caused problems in certain environments, namely Azure, where TCP connections that don’t send any data for more than four minutes are terminated. This would cause long-running syncs in Azure to die unexpectedly.

In Unicorn 4 the automated tool console emits data in a stream just like the control panel console. There’s also a heartbeat timer where if no new console entries are made for 30 seconds, a . will be sent to make sure the connection is kept active.

The streaming tool console also requires updating your Unicorn.psm1 file - not only will you get defense against TCP timeouts, you’ll also be able to see the sync occur in real time using the PSAPI just like you would from the control panel. No more waiting until it’s done to see how things are going :)

Predicates can Exclude by Template ID or Name

Unicorn 4 can now exclude items from a configuration by template ID, thanks to Alan Płócieniak. See also Alan’s original post on the technique.

<include name="Template ID" database="master" path="/sitecore/allowed">
  <exclude templateId="{3B4F2B85-778D-44F3-9B2D-BEFF1F3575E6}" />
</include>

You can also exclude items by a regular expression of their name. This enables scenarios such as wanting to include all templates, but exclude all __Standard values items.

<include name="Name pattern" database="master" path="/sitecore/namepattern">
  <exclude namePattern="^__Standard values$" />
</include>

The complete grammar for predicates is always in the predicate test config.

Breaking Changes

Unicorn 4’s breaking changes do not break any common use-cases of Unicorn, but review them to see if they affect you.

  • The __Originator field is now serialized by default. This enables proper tracking of the origin of items instantiated from branch templates.
  • Multithreaded sync support has been removed due to Sitecore bugs preventing realistic use. This was disabled by default already. Dilithium is faster than multithread sync ever was.
  • The Rainbow UseLegacyAttributeFormatting (formats items in Unicorn < 3.1 format) setting has been removed. The new format is now always used. This has always been off by default.
  • Rainbow FieldComparers are no longer activated using the Sitecore Factory, so they only support parameterless constructors (this would only affect custom comparers; the stock ones have always been parameterless)
  • Due to the console upgrades, a new Unicorn.psm1 is required if you are using Unicorn’s PowerShell API. This file also now ships in the NuGet package, so you can be sure you’re getting the right version for your Unicorn.

Bug Fixes

  • Transparent sync misc fixes (to renaming, saving display names, instantiating branch templates into TpSync areas)
  • Renaming an item and changing fields on it in one operation (only possible with Sitecore API) now no longer loses the additional field data in the serialized item
  • Improved output and logging, clarified messaging, improved developer experience
  • Content editor warnings now handle items in more than one configuration correctly
  • Control panel now displays which paths are invalid when initial serialization is blocked due to invalid include paths
  • The required password length for user serialization is now configurable, should you really really want to use b
  • Using Fast Query while any Transparent Sync configuration is active will now log a warning to the Sitecore logs (fast query cannot find transparent sync items, so items may not be returned as expected). This can be disabled if your logs start to fill with spam and you understand the issue.
  • PowerShell API now defaults to not write secrets to the console (debug is off by default) for secure-by-default-ness
  • Fixed a background exception that could occur when modifying serialized items on disk in rare cases, which could cause the app pool to terminate #222
  • The Rainbow data cache now correctly invalidates if an item is moved or renamed on disk after being added to the cache
  • Choosing many configurations to sync will no longer push the sync button off the page #232
  • The default console logging level for interactive syncing has been changed from Info to Debug, since there is no longer a scaling issue with the console output. This provides better insights into what has been changed on items without needing to see the Sitecore logs

Upgrading

If you’re coming from classical Unicorn 3.1 or later, upgrading is actually really simple: just upgrade your NuGet package. Unicorn 4 changes nothing about storage or formatting (except that the __Originator field is no longer ignored by default), so all existing serialized items are compatible.

If you’re invoking Unicorn via its remote PowerShell API, make sure to upgrade your Unicorn.psm1 to the Unicorn 4 version to ensure correct error handling with the streaming console.

Thanks

Thank you to the community members who contributed code and bug reports to this release.

Simplifying Contact Facets with C# 6 https://kamsar.net/index.php/2017/06/Simplifying-Contact-Facets-with-C-6/ 2017-06-02T17:04:08.000Z 2021-07-26T23:19:02.506Z Contact Facets allow you to persist information about visitors into the Sitecore xDB. We’re not going to get into the theory behind them in this post; for that go read Pete Navarra’s great blog post that summarizes current practices and how to add facets.

Today we’re going to discuss how to syntactically improve the declaration of a contact facet class using syntaxes available in C# 6.0 (VS 2015) and C# 7.0 (VS 2017). It’s important to note that the C# version is decoupled from the .NET framework version: the C# 7.0 compiler is perfectly capable of emitting C# 7 syntax to a .NET 4.5-targeted assembly, for instance. So you can use these modern language features as long as you’ve got the right version of MSBuild or Visual Studio :)

Here’s the example Pete uses in his post, which follows other examples out there as well:

using System;
using Sitecore.Analytics.Model.Framework;

namespace SitecoreHacker.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        private const string CUSTOMER_ID = "CustomerId";
        private const string SEGEMENT = "Segment"; // sic :p

        #region Properties
        public string CustomerId
        {
            get { return GetAttribute<string>(CUSTOMER_ID); }
            set { SetAttribute(CUSTOMER_ID, value); }
        }

        public string Segment
        {
            get { return GetAttribute<string>(SEGEMENT); }
            set { SetAttribute(SEGEMENT, value); }
        }
        #endregion

        public MarketingData()
        {
            EnsureAttribute<string>(CUSTOMER_ID);
            EnsureAttribute<string>(SEGEMENT);
        }
    }
}

As you can see, the facet API requires string keys for the facet values - in this case stored as const string - to get and set them. Further, as Pete notes:

I found out the hard way that the constants defined, the value must equal the actual name of the class property for the same attribute.

Well in C# 6 (VS 2015), there’s a syntax for that. The nameof statement allows you to get the string name of a variable or property. This essentially hands off the management and maintenance of the const value to the compiler, instead of the developer.

So we can clean up this example by using nameof instead of constants - and get as a bonus refactoring support and compile-time validation of the names:

using System;
using Sitecore.Analytics.Model.Framework;

namespace Elephant.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        public string CustomerId
        {
            get { return GetAttribute<string>(nameof(CustomerId)); }
            set { SetAttribute(nameof(CustomerId), value); }
        }

        public string Segment
        {
            get { return GetAttribute<string>(nameof(Segment)); }
            set { SetAttribute(nameof(Segment), value); }
        }

        public MarketingData()
        {
            EnsureAttribute<string>(nameof(CustomerId));
            EnsureAttribute<string>(nameof(Segment));
        }
    }
}

Finally, if you have C# 7.0 (VS 2017), you can also use expression-bodied members to further clean up the property syntax:

using System;
using Sitecore.Analytics.Model.Framework;

namespace Rhino.Sandbox.Facets
{
    [Serializable]
    public class MarketingData : Facet, IMarketingData
    {
        public string CustomerId
        {
            // expression-bodied members turn the single-expression get
            // into a lambda-style syntax, removing the need for braces
            get => GetAttribute<string>(nameof(CustomerId));
            set => SetAttribute(nameof(CustomerId), value);
        }

        public string Segment
        {
            get => GetAttribute<string>(nameof(Segment));
            set => SetAttribute(nameof(Segment), value);
        }

        public MarketingData()
        {
            EnsureAttribute<string>(nameof(CustomerId));
            EnsureAttribute<string>(nameof(Segment));
        }
    }
}

So there - now go forth and put your data in the xDB :)

Unicorn 4 Part III: Configuration Enhancements https://kamsar.net/index.php/2017/02/Unicorn-4-Part-III-Configuration-Enhancements/ 2017-02-28T03:03:31.000Z 2021-07-26T23:19:02.505Z TL;DR: Unicorn 4 prerelease is on NuGet right now!

Now that that’s out of the way, let’s talk about another new Unicorn 4 feature: modular architecture friendly configurations.

When Habitat first launched, I was mildly incredulous at the amount of duplication in its Unicorn configurations. Tons of tiny modules, all of which shared similar but not identical configurations (such as custom root folders) was not really a consideration when multiple configurations were originally conceived. Fast forward to today, and that’s a major use case that is more difficult than it needs to be.

Here’s an example of a Habitat Unicorn configuration:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <unicorn>
      <configurations>
        <configuration
          name="Feature.News"
          description="Feature News"
          dependencies="Foundation.Serialization,Foundation.Indexing"
          patch:after="configuration[@name='Foundation.Serialization']">
          <targetDataStore physicalRootPath="$(sourceFolder)\feature\news\serialization"
            type="Rainbow.Storage.SerializationFileSystemDataStore, Rainbow" useDataCache="false"
            singleInstance="true" />
          <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true">
            <include name="Feature.News.Templates" database="master" path="/sitecore/templates/Feature/News" />
            <include name="Feature.News.Renderings" database="master" path="/sitecore/layout/renderings/Feature/News" />
            <include name="Feature.News.Media" database="master" path="/sitecore/media library/Feature/News" />
          </predicate>
          <roleDataStore type="Unicorn.Roles.Data.FilesystemRoleDataStore, Unicorn.Roles" physicalRootPath="$(sourceFolder)\feature\news\serialization\Feature.News.Roles" singleInstance="true"/>
          <rolePredicate type="Unicorn.Roles.RolePredicates.ConfigurationRolePredicate, Unicorn.Roles" singleInstance="true">
            <include domain="modules" pattern="^Feature News .*$" />
          </rolePredicate>
        </configuration>
      </configurations>
    </unicorn>
  </sitecore>
</configuration>

It’s long and it has a ton of boilerplate that is either identical in every module, or else defined by system conventions (e.g. physicalRootPath). We don’t need to be that verbose when using Unicorn 4. When we setup a modular, convention-based system using Unicorn 4 we can start by using abstract configurations to define the conventions of our system:

<configuration name="Habitat.Feature.Base" abstract="true">
  <targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization" />
  <predicate>
    <include name="$(layer).$(module).Templates" database="master" path="/sitecore/templates/$(layer)/$(module)" />
    <include name="$(layer).$(module).Renderings" database="master" path="/sitecore/layout/renderings/$(layer).$(module)" />
    <include name="$(layer).$(module).Media" database="master" path="/sitecore/media library/$(layer).$(module)" />
  </predicate>
  <roleDataStore type="Unicorn.Roles.Data.FilesystemRoleDataStore, Unicorn.Roles" physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization\$(layer).$(module).Roles" singleInstance="true"/>
  <rolePredicate type="Unicorn.Roles.RolePredicates.ConfigurationRolePredicate, Unicorn.Roles" singleInstance="true">
    <include domain="modules" pattern="^$(layer) $(module) .*$" />
  </rolePredicate>
</configuration>

This configuration defines a configuration that other configurations can extend. Because of its abstract-ness it is not a Unicorn configuration itself, only a template. Non-abstract configurations may also be extended.

This abstract configuration is also making use of Unicorn 4’s ability to do variable replacement in configurations. The $(layer) and $(module) variables are expanded in the extending configuration and are based on the convention of naming your configurations Layer.Module. You can also expand more than one config per module and use your own variables. Using our abstract Habitat.Feature.Base configuration above, the same Feature.News configuration we started with can now be expressed much more simply:

<configuration
  name="Feature.News"
  description="Feature News"
  dependencies="Foundation.Serialization,Foundation.Indexing"
  extends="Habitat.Feature.Base">
</configuration>

Nice huh? But what if you want to extend or replace a dependency in the inherited configuration? You can do that, too - and using Unicorn 4’s element inheritance system you can also do it very cleanly. Unicorn configurations have always been architecturally a set of independent IoC containers. The <defaults> node in Unicorn.config sets up the defaults, and then each configuration’s nodes override and replace the defaults if they exist. This is how you can deploy only new items with the NewItemsOnlyEvaluator - you’re replacing the default evaluator with a different dependency implementation.
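
For example, replacing the default evaluator is a single node in your configuration. This sketch assumes the stock NewItemsOnlyEvaluator that ships with Unicorn:

<configuration name="MyConfiguration">
  <!-- overrides the default evaluator for this configuration only -->
  <evaluator type="Unicorn.Evaluators.NewItemsOnlyEvaluator, Unicorn" singleInstance="true" />
</configuration>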

Unicorn 4 takes this a step further: with config inheritance, dependencies can be partially extended at an element level. You might have noticed this already in the Habitat.Feature.Base configuration, when we did this:

<targetDataStore physicalRootPath="$(sourceFolder)\$(layer)\$(module)\serialization" />

In Unicorn 3, this would have required a type attribute. In Unicorn 4, unless you specify a type attribute, any attributes you add either replace or add to those on the default (or inherited) implementation. So this kept the same default dependency definition and changed a single attribute on it - the physicalRootPath.

If you do specify a type, nothing is inherited and it works like Unicorn 3. Thus existing configurations will also work without modification :)
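For instance, a sketch of opting out (the type shown assumes Unicorn’s default Rainbow filesystem data store; any type attribute triggers this behavior):

<!-- because type is specified, nothing is inherited from the base; this node stands alone, Unicorn 3 style -->
<targetDataStore type="Rainbow.Storage.SerializationFileSystemDataStore, Rainbow" physicalRootPath="c:\myitems" singleInstance="true" />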

But what about dependencies that have more than just attributes, like the predicate‘s include nodes? In that case, elements you define in the extending configuration are appended to the inherited ones. If we take our Habitat.Feature.Base configuration above and extend it like this:

<predicate>
<include name="Foo" database="master" path="/sitecore/Foo" />
</predicate>

The end result is effectively:

<predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true">
<include name="Feature.News.Templates" database="master" path="/sitecore/templates/Feature/News" />
<include name="Feature.News.Renderings" database="master" path="/sitecore/layout/renderings/Feature/News" />
<include name="Feature.News.Media" database="master" path="/sitecore/media library/Feature/News" />
<include name="Foo" database="master" path="/sitecore/Foo" />
</predicate>

You cannot remove inherited predicate nodes (or the child elements of other dependencies that use them, like fieldFilter), so plan accordingly: element inheritance is strictly additive.

And there you have it: with Unicorn 4 you can quite simply create serialization conventions for your modules and avoid configuration duplication - or, if you’re not ready to go modular, you can at least enjoy not needing a type attribute on most configuration nodes.

But Wait, There’s More 🐘: The Console No Longer Sucks

In the Control Panel…

The Unicorn console has also received a serious upgrade in Unicorn 4. If you’ve ever run a sync that changed a large number of items from the Unicorn Control Panel, you may have noticed the browser slow to a crawl and the sync seem to almost stop. The console that underpins Unicorn 3 and earlier started to choke at around 500 lines.

No longer! Unicorn 4’s console has spit out 100,000 lines without a hitch.

Automated Tools (PowerShell API)

The automated tool console has also received an upgrade. Previously the tool console buffered all the output of a sync before sending it back. This caused problems in certain environments, namely Azure, where TCP connections that don’t send any data for more than 4 minutes are terminated. This would cause any long-running syncs in Azure to die unexpectedly.

In Unicorn 4 the automated tool console emits data as a stream, just like the control panel console. There’s also a heartbeat: if no new console entries are written for 30 seconds, a . is sent to keep the connection alive.

The streaming console does require updating your Unicorn.psm1 file - but in exchange you get not only a defense against TCP timeouts, but also the ability to watch the sync occur in real time over the PSAPI, just as you would from the control panel. No more waiting until it’s done to see how things are going :)
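
If you haven’t scripted the PSAPI before, a remote sync looks roughly like this (a sketch: the URL and shared secret are placeholders, and the parameter names assume the Sync-Unicorn function in the shipped Unicorn.psm1):

# Import the module that ships with the Unicorn NuGet package
Import-Module .\Unicorn.psm1

# Remotely sync two configurations; with the streaming console,
# log lines appear as the sync runs instead of all at once at the end
Sync-Unicorn -ControlPanelUrl 'https://mysite.local/unicorn.aspx' -SharedSecret '(your secret)' -Configurations @('Foundation.Serialization', 'Feature.News')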

Can I have it yet?

Absolutely. You can find Unicorn 4.0.0-pre03 on NuGet right now!

How stable is this?

More stable than you might think. Unicorn 4 is largely additions, fixes, and enhancements to the already stable codebase behind Unicorn 3. The core pieces have not changed very much - unless you enable Dilithium, and that’s optional. The new config inheritance code has 97% test coverage. That’s not to say it’s bug-free, though: if you find bugs, let me know and I’ll fix them :)

What about installing it?

Installation is just like Unicorn 3: install the Unicorn NuGet package, then follow the directions in the README that launches on installation to set up your configuration(s).
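
In the NuGet Package Manager Console, that’s a one-liner (the prerelease flag is assumed here because 4.0.0-pre03 is a prerelease):

# Install Unicorn, allowing prerelease versions
Install-Package Unicorn -IncludePrerelease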

NOTE: Dilithium ships disabled by default. If you want to enable it, make a copy of Unicorn.Dilithium.config.example and enable it.

What about upgrading to it?

If you’re coming from classical Unicorn 3.1 or later, upgrading is actually really simple: just upgrade your NuGet package. Unicorn 4 changes nothing about storage or formatting (except that the __Originator field is no longer ignored by default), so all existing serialized items are compatible.

Taking advantage of the config enhancements detailed above is also entirely optional: Unicorn 3 configurations are totally readable by Unicorn 4.

If you’re invoking Unicorn via its PowerShell API, make sure to upgrade your Unicorn.psm1 to the Unicorn 4 version to ensure correct error handling with the streaming console.

Have fun!

Unicorn 4 Preview Part 2.5: Generating Unicorn Packages with SPE https://kamsar.net/index.php/2017/02/Unicorn-4-Preview-Part-2-5-Generating-Packages-with-SPE/ 2017-02-12T05:09:02.000Z 2021-07-26T23:19:02.504Z Last time we talked about how Sitecore PowerShell Extensions support was coming to Unicorn 4. This time, we’ve got a new cmdlet to share.

Over time, many people have asked if there was a way to generate Sitecore packages from Unicorn. The answer has always been no, for many good reasons: packages install slowly, cannot ignore specific fields, and cannot process advanced exclusions the way a Unicorn predicate can. This makes them much less safe (and much slower) for deployment purposes than a remotely invoked sync using deployed serialized items.

But there is a great use case for generating packages from Unicorn: authoring modules. As a module author, you need a way to track the items that belong to your module, and a reliable way to create the Sitecore packages containing those items for distribution. Unicorn is a natural fit for tracking module items, but it has lacked the ability to automatically push updates to release packages the way it can to serialized items. That unnecessarily complicates things and reduces release reliability. That’s bad.

So when Michael West and Adam Najmanowicz, the authors of Sitecore PowerShell Extensions, asked if there was a way we could export Unicorn configurations to packages, my answer was absolutely.

SPE has long had packaging support built into it, and in fact SPE’s release packages are built using SPE. Unicorn packaging support is also implemented through SPE, and here’s how it works:

# Create a new Sitecore Package (SPE cmdlet)
$pkg = New-Package

# Get the Unicorn Configuration(s) we want to package
$configs = Get-UnicornConfiguration "Foundation.*"

# Pipe the configs into New-UnicornItemSource
# to process them and add them to the package project
# (without -Project, this would emit the source object(s)
# which can be manually added with $pkg.Sources.Add())
$configs | New-UnicornItemSource -Project $pkg

# Export the package to a zip file on disk
Export-Package -Project $pkg -Path "C:\foo.zip"

And when you’re done, C:\foo.zip will contain a package that, when installed, includes the entire contents of every Unicorn configuration matching Foundation.*.

New-UnicornItemSource also accepts parameters to specify package installation options, exactly like SPE’s New-ExplicitItemSource. In fact it works very much like New-ExplicitItemSource: each item included in the configuration is added to the package as an explicit item source. This also means the exported package completely respects the Unicorn predicate, including exclusions of child paths (note that if you specify -InstallMode Overwrite, excluded children may be deleted by the package).
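
For example, a sketch of passing an installation option through (assuming -InstallMode is accepted directly, per the note above):

# Add the configurations' items to the package, overwriting on install
# (beware: Overwrite may delete excluded children of included items)
$configs | New-UnicornItemSource -Project $pkg -InstallMode Overwrite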

Questions?

Are the packaged items pulled directly from the serialized items?

No - they are pulled from the Sitecore database, because the Sitecore packaging APIs work in Items. So make sure to sync before you generate a package - unless you’re using Transparent Sync, in which case the items will already be up to date.

Should I use this to deploy my site?

No. As mentioned above, packages are a slower and more dangerous method to deploy item updates to your site.

Does this mean modules will start requiring Unicorn? 🐘

No. Unicorn would only be used during development of the module, and by the build process used to generate plain old Sitecore packages for module releases. The module itself would depend on neither Unicorn, Rainbow, nor SPE.

Where's the comments? https://kamsar.net/index.php/2017/02/Where-s-the-comments/ 2017-02-07T03:20:50.000Z 2021-07-26T23:19:02.504Z Because of low utilization and the fact that Disqus is about to start spamming you with wonderful ads, I’ve decided to turn comments off on my blog.

That’s not to say you’re not allowed to comment, because you can tweet your comments or join Sitecore Community Slack and comment all day.

In other news the blog is now fully SSL enabled, courtesy of CloudFlare.

Happy hacking!

Unicorn 4 Preview, Part 2: SPE Support https://kamsar.net/index.php/2017/02/Unicorn-4-Preview-Part-2-SPE-Support/ 2017-02-06T23:16:03.000Z 2021-07-26T23:19:02.501Z Unicorn 4 will feature full support for Sitecore PowerShell Extensions (SPE) to perform Unicorn actions. If you’ve ever wanted deep programmatic control over Unicorn (for example “I want to sync Foundation.*”), or if you’ve got an existing deployment process that’s already using SPE Remoting to perform deployment tasks - this is for you.

To use Unicorn cmdlets in SPE, all that is necessary is to install the SPE package alongside Unicorn. Unicorn 4 ships with configuration that remains quiescent until SPE is installed, at which point the Unicorn cmdlets are automatically enabled. In case that’s not clear enough: SPE is an optional addition and is not required to use Unicorn 4.

So what can we do with Unicorn cmdlets for SPE?

Configurations

# Get one
Get-UnicornConfiguration "Foundation.Foo"

# Get by filter
Get-UnicornConfiguration "Foundation.*"

# Get all
Get-UnicornConfiguration

The result of Get-UnicornConfiguration is an array of IConfiguration objects, which you can spelunk (e.g. via their Name property) or pass to other cmdlets. Configurations are read-only.
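
For instance, a quick sketch of spelunking the results:

# List the names of all registered configurations
Get-UnicornConfiguration | ForEach-Object { $_.Name }

# Stash a filtered set to pass to other cmdlets later
$foundation = Get-UnicornConfiguration "Foundation.*"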

Syncing

Sync cmdlets make use of Write-Progress to provide a progress bar experience similar to the Control Panel’s, albeit a bit less responsive.

# Sync one
Sync-UnicornConfiguration "Foundation.Foo"

# Sync multiple by name
Sync-UnicornConfiguration @("Foundation.Foo", "Foundation.Bar")

# Sync multiple from pipeline
Get-UnicornConfiguration "Foundation.*" | Sync-UnicornConfiguration

# Sync all, except transparent sync-enabled configurations
Get-UnicornConfiguration | Sync-UnicornConfiguration -SkipTransparent

# Optionally set log output level (Debug, Info, Warn, Error)
Sync-UnicornConfiguration -LogLevel Warn


Partial Syncing

Sometimes you want to only sync a portion of a configuration. You can do that with PowerShell using Sync-UnicornItem.

# Sync a single item (note: must be under Unicorn control)
Get-Item "/sitecore/content" | Sync-UnicornItem

# Sync multiple single items (note: all must be under Unicorn control)
Get-ChildItem "/sitecore/content" | Sync-UnicornItem

# Sync an entire item tree, show only warnings and errors
Get-Item "/sitecore/content" | Sync-UnicornItem -Recurse -LogLevel Warn

Reserializing

The cmdlet to reserialize is called Export-UnicornConfiguration because Reserialize is not an approved verb for a cmdlet :)

# Reserialize one
Export-UnicornConfiguration "Foundation.Foo"

# Reserialize multiple by name
Export-UnicornConfiguration @("Foundation.Foo", "Foundation.Bar")

# Reserialize from pipeline
Get-UnicornConfiguration "Foundation.*" | Export-UnicornConfiguration

Partial Reserializing

Sometimes you want to only reserialize a portion of a configuration. You can do that with PowerShell using Export-UnicornItem.

# Reserialize a single item (note: must be under Unicorn control)
Get-Item "/sitecore/content" | Export-UnicornItem

# Reserialize multiple single items (note: all must be under Unicorn control)
Get-ChildItem "/sitecore/content" | Export-UnicornItem

# Reserialize an entire item tree
Get-Item "/sitecore/content" | Export-UnicornItem -Recurse

Converting to Raw YAML

You can also dump out the raw YAML for an item - or items. The output of ConvertTo-RainbowYaml is either a string or an array of strings, depending on how many items were passed to it. Note that unless -Raw is specified, the default field formatters and excluded fields that Unicorn ships with are used. These are not customizable from this cmdlet, and they will not reflect any changes you make to Unicorn’s defaults.

This capability enables casual use of YAML serialization without having to use Unicorn or set up a configuration. It’s not a good solution for general-purpose synchronization, though, simply because the nuances of storing trees of items in files are many. Very many. But I’m curious what uses people will find for this :)

# Convert an item to YAML format (always uses default excludes and field formatters)
Get-Item "/sitecore/content" | ConvertTo-RainbowYaml

# Convert many items to YAML strings
Get-ChildItem "/sitecore/content" | ConvertTo-RainbowYaml

# Disable all field formats and field filtering
# (e.g. disable XML pretty printing,
# and don't ignore the Revision and Modified fields, etc)
Get-Item "/sitecore/content" | ConvertTo-RainbowYaml -Raw

Converting from Raw YAML

In Rainbow, the IItemData interface is the internal representation of an Item. The ConvertFrom-RainbowYaml cmdlet converts raw YAML string(s) into IItemData, which you can then spelunk as objects or deserialize as needed.

# Get IItemDatas from YAML variable
$rawYaml | ConvertFrom-RainbowYaml

# Get IItemData and disable all field filters
# (use this if you ran ConvertTo-RainbowYaml with -Raw)
$yaml | ConvertFrom-RainbowYaml -Raw

Deserialization

To deserialize items, use Import-RainbowItem which takes IItemData items in and deserializes them into the Sitecore database. No comparison is done before deserialization, which makes this a bit slower than a full Unicorn sync.

As a shorthand, Import-RainbowItem also accepts YAML strings directly. And because IItemData can represent any sort of item, the cmdlet is not limited to deserializing YAML-sourced items.

# Deserialize IItemDatas from ConvertFrom-RainbowYaml
$rawYaml | ConvertFrom-RainbowYaml | Import-RainbowItem

# Deserialize raw YAML from pipeline into Sitecore
# Shortcut bypassing ConvertFrom-RainbowYaml
$yaml | Import-RainbowItem

# Deserialize and disable all field filters
# (use this if you ran ConvertTo-RainbowYaml with -Raw)
$yaml | Import-RainbowItem -Raw

# Deserialize multiple at once
$yamlStringArray | Import-RainbowItem

# Complete example that does nothing but eat CPU
Get-ChildItem "/sitecore/content" | ConvertTo-RainbowYaml | Import-RainbowItem
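
One plausible casual use, per the note above about using YAML without a full configuration, is a quick snapshot-and-restore of an item (a sketch; the backup path is hypothetical):

# Snapshot an item to a YAML file on disk
Get-Item "/sitecore/content/Home" | ConvertTo-RainbowYaml | Set-Content "C:\backups\home.yml"

# Later, restore the snapshot into the database
Get-Content "C:\backups\home.yml" -Raw | Import-RainbowItem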

Questions?

Does this mean the existing PowerShell Remote API is obsolete?

No. The existing PowerShell API uses Windows PowerShell to provide remote syncing capability and does not require installing Sitecore PowerShell. They serve different, parallel purposes, and both are here to stay.

You have no gifs or memes in this post. Is something wrong?

you mad?

Can I have a beta yet?

I’ll be releasing a beta once I finish the features I have planned. Yep, there’s at least one more ;)
