Join me in my experiment to build an Astro blog where the content lives on GitHub, but without pushing and committing Markdown files through Git: instead, we'll use GitHub Discussions as a CMS for our Markdown content and the GitHub API to retrieve it.
So I've been trying to finish my blog for some time now, and I finally managed to. I wanted it to be up and running quickly, and also to be easily maintainable.
I chose Astro as the best fit for the job, with its great Markdown support, but one thing I didn't like was having both the content and the codebase in the same repository.
I came across some articles about using GitHub as a CMS, saw a few boilerplates built around the idea, and wanted to have a go at it, but it wasn't easy.
One great feature I liked about Astro is Content Collections, where you create and manage your own types of collections; a collection here is simply a folder with some Markdown files (more on that in the Astro docs).
The problem is that Astro currently doesn't support fetching remote Markdown files while still benefiting from Content Collections: the Markdown files need to exist locally to be treated as a collection.
So the TL;DR of this article: fetch Markdown from a remote source, in our case GitHub (a perfect place to host your Markdown content, more on that later on), and save the fetched Markdown content as local files, all of this at build time.
I found two ways to build a CMS on top of GitHub: issues or discussions. I was already familiar with issues, having seen them used for a lot of blogs, but GitHub Discussions turned out to be the better choice. First we'll go through the setup, then the fetch script that uses the GitHub API.
So here is what I did to set up the blog on the GitHub side: I enabled Discussions on the repository, created the two discussion categories I use for posts (Drafts and Release), created the labels I want to use as tags (plus a few @-prefixed flag labels like @featured), and pinned the discussions I want to highlight.
Now we need to decide on the format of the Markdown files we want to save locally, so we create a new content collection:
// src/content/config.ts
import { defineCollection, z } from "astro:content";

const blogCollection = defineCollection({
  schema: z.object({
    title: z.string(),
    subtitle: z.string().nullish(),
    lede: z.string(),
    tags: z.string().array(),
    created_at: z.date(),
    updated_at: z.date(),
    draft: z.boolean(),
    picked: z.boolean(),
    featured: z.boolean(),
  }),
});

export const collections = {
  blog: blogCollection,
};
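Just to show where this schema ends up being used, here's a minimal sketch of querying the collection in a page. The file name and markup below are hypothetical, only `getCollection` itself is Astro's content API:

```astro
---
// src/pages/blog/index.astro (hypothetical listing page)
import { getCollection } from "astro:content";

// Only non-draft posts, newest first
const posts = (await getCollection("blog", ({ data }) => !data.draft)).sort(
  (a, b) => b.data.created_at.valueOf() - a.data.created_at.valueOf()
);
---

<ul>
  {posts.map((post) => (
    <li>
      <a href={`/blog/${post.slug}/`}>{post.data.title}</a>
    </li>
  ))}
</ul>
```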
Now we need to map the discussion's settings onto the frontmatter fields above. Here is how each field is sourced (as implemented in the fetch script later on):
- title: the discussion title
- tags: the discussion labels (labels prefixed with @ are treated as flags, not tags)
- created_at / updated_at: the discussion's createdAt and updatedAt timestamps
- draft: true when the discussion is in the Drafts category
- picked: true when the discussion is pinned
- featured: true when the discussion has the @featured label
- subtitle and lede: taken from a small "frontmatter" block at the top of the discussion body

As for that "frontmatter" block, here's an example:
---
subtitle: this is a subtitle
lede: this is a long lede text, bla bla bla..
---
content here
Now for the fetching part. I had to add an extra build step for Astro, and here's how it goes:
{
  ...
  "scripts": {
    "dev": "astro dev",
    "fetch": "node scripts/fetch-discussions.mjs",
    "start": "astro dev",
    "build": "pnpm run fetch && astro build",
    "preview": "astro preview",
    "astro": "astro"
  },
  ...
}
For the script, I created a new file under /scripts/fetch-discussions.mjs:
import fs from "fs";
import path from "path";

async function fetchDiscussions() {
  // Use GitHub API to fetch discussions
}

fetchDiscussions().then((discussions) =>
  discussions.map((discussion, idx) => {
    // Transform discussion data and Markdown to a compatible Markdown for Astro
    const content = "...";
    // Slugify post title and use it as the filename
    const filename = "...";
    // Save new formatted Markdown to a file under "src/content/blog"
    const dist = path.join(process.cwd(), "src/content/blog");
    if (!fs.existsSync(dist)) {
      fs.mkdirSync(dist);
    }
    fs.writeFile(`${dist}/${filename}`, content, (err) => {
      if (err) throw err;
      console.log(`Saved discussion ${idx + 1} to ${filename}`);
    });
  })
);
Now to elaborate further on the fetchDiscussions() function:
import { graphql } from "@octokit/graphql";
import { loadEnv } from "vite";

const gql = String.raw;

const owner = "OWNER";
const repo = "REPO";

async function fetchDiscussions() {
  const { GITHUB_TOKEN } = loadEnv("production", process.cwd(), "");
  const graphqlAuth = graphql.defaults({
    headers: {
      authorization: `token ${GITHUB_TOKEN}`,
    },
  });
  // Retrieve pinned discussions ids
  const resPinned = await graphqlAuth(
    gql`
      query {
        repository(owner: "<owner>", name: "<repo>") {
          pinnedDiscussions(last: 100) {
            nodes {
              discussion {
                id
              }
            }
          }
        }
      }
    `
      .replace("<owner>", owner)
      .replace("<repo>", repo)
  );
  const pinnedDiscussionsIds = resPinned.repository.pinnedDiscussions.nodes.map(
    (node) => node.discussion.id
  );
  // Retrieve all discussions (Drafts & Release categories)
  const res = await graphqlAuth(
    gql`
      query {
        search(
          query: "repo:<owner>/<repo> is:open category:Drafts category:Release"
          type: DISCUSSION
          last: 100
        ) {
          edges {
            node {
              ... on Discussion {
                id
                title
                body
                category {
                  slug
                }
                labels(last: 100) {
                  nodes {
                    name
                  }
                }
                createdAt
                updatedAt
                url
              }
            }
          }
          discussionCount
        }
      }
    `
      .replace("<owner>", owner)
      .replace("<repo>", repo)
  );
  return res.search.edges
    .map((edge) => edge.node)
    .map((discussion) => ({
      ...discussion,
      category: discussion.category.slug,
      labels: discussion.labels.nodes.map((node) => node.name),
      isPinned: pinnedDiscussionsIds.includes(discussion.id),
    }));
}
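For the token, I'm assuming a GitHub personal access token exposed to the script either as an environment variable in CI or via a local .env file that Vite's loadEnv picks up, something like this (the value is a placeholder):

```
# .env
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
```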
Now the final part:
import matter from "gray-matter";

function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^\w\s-]/g, "")
    .replace(/[\s]+/g, "-")
    .replace(/[-]+/g, "-")
    .trim();
}

fetchDiscussions().then((discussions) =>
  discussions.map((discussion, idx) => {
    // Extract frontmatter data from discussion Markdown
    const { content: body, data: frontmatter } = matter(discussion.body);
    // Construct post data object
    const post = {
      title: discussion.title,
      ...frontmatter,
      tags: `[${discussion.labels
        .filter((label) => !label.includes("@"))
        .map((label) => `"${label}"`)
        .join(", ")}]`,
      created_at: discussion.createdAt,
      updated_at: discussion.updatedAt,
      draft: discussion.category === "drafts",
      picked: discussion.isPinned,
      featured: discussion.labels.some((label) => label === "@featured"),
    };
    // Create a slug from the title of the post (this is still problematic, more on that later on)
    const slug = slugify(discussion.title);
    const filename = `${slug}.md`; // TODO: handle posts with the same title
    // Construct the post Markdown content
    const content = `---
${Object.keys(post)
  .map((key) => `${key}: ${post[key]}`)
  .join("\n")}
---
${body}
`.replace(/\r/g, "");
    // Save new formatted Markdown to a file under "src/content/blog"
    const dist = path.join(process.cwd(), "src/content/blog");
    if (!fs.existsSync(dist)) {
      fs.mkdirSync(dist);
    }
    fs.writeFile(`${dist}/${filename}`, content, (err) => {
      if (err) throw err;
      console.log(`Saved discussion ${idx + 1} to ${filename}`);
    });
  })
);
Now, when we run the command below, our discussions will be turned into blog posts, saved under /src/content/blog/*:
pnpm run fetch
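To make the mapping concrete, here's roughly what a generated file could look like for the example discussion above (the title, tags, and dates are made up for illustration):

```md
---
title: my example post
subtitle: this is a subtitle
lede: this is a long lede text, bla bla bla..
tags: ["astro", "github"]
created_at: 2023-06-01T10:00:00Z
updated_at: 2023-06-02T08:30:00Z
draft: false
picked: true
featured: false
---
content here
```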
Finally, we need to change the build command for our hosting provider. In our case nothing comes close to Vercel, so we head to our project's settings page, then to "Build & Development Settings", and override the default build command:
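I haven't reproduced the settings screen here, but with the scripts above the override boils down to pointing Vercel at the build script instead of the bare astro build. If you prefer keeping it in the repo, the same override can be expressed in a vercel.json (my addition, the dashboard route works just as well):

```json
{
  "buildCommand": "pnpm run build"
}
```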
Now the cherry on top is to have our posts automatically built and deployed. I used GitHub Actions to trigger a Vercel deployment on each discussion change event, with two workflow files, one for preview and one for production deployments on Vercel. This is an example of the preview.yml:
name: Vercel Deployment Dispatcher (Preview)

on:
  discussion:
    types:
      [
        created,
        edited,
        deleted,
        pinned,
        unpinned,
        labeled,
        unlabeled,
        category_changed,
      ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check if changes come from the Drafts/Release discussion categories
        env:
          VERCEL_DEPLOY_TOKEN: ${{ secrets.VERCEL_PREVIEW_DEPLOY_TOKEN }}
          VERCEL_PROJECT_ID: "prj_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        run: |
          discussion_category="${{ github.event.discussion.category.slug }}"
          if [ "$discussion_category" = "drafts" -o "$discussion_category" = "release" ]; then
            echo "Matched Draft/Release discussion category, triggering a deployment.."
            curl -X POST "https://api.vercel.com/v1/integrations/deploy/$VERCEL_PROJECT_ID/$VERCEL_DEPLOY_TOKEN"
            echo "Deployment triggered successfully!"
          else
            echo "Out of scope discussion category, failing silently.."
          fi
Now, let's talk about some problems with this approach.
As you can see in the fetch code above, we're creating the slug from the title of the discussion, but the title can change, and so will the slug, which is very bad for SEO: we want a post to always keep the same slug and the same URL.
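One way I can think of to mitigate this (a sketch on my part, not what the script above does) is to let the discussion declare its own slug in the frontmatter block and only fall back to the title when it's missing:

```js
// A possible mitigation: honour an explicit `slug` field from the discussion's
// frontmatter, and only slugify the title when no slug was provided.
const slug = frontmatter.slug ?? slugify(discussion.title);
const filename = `${slug}.md`;
```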
Another problem is that the publish date of the post is not changeable; it's always tied to the created_at date of the discussion. But that's not a huge problem, I can live with it.
With GitHub Actions, I was not able to limit the discussion event triggers to certain discussion categories (Drafts & Release, in my case), meaning every discussion change will trigger a new run. But since the runs don't take long (12 seconds tops), it's alright.
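One thing that does seem possible is skipping the job early with a job-level condition, so the run still triggers but exits almost immediately. This is just a sketch on my part, not part of the original workflow:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # Skip the job unless the discussion is in the drafts/release categories
    # (manual workflow_dispatch runs are still allowed through).
    if: github.event_name == 'workflow_dispatch' || contains(fromJSON('["drafts", "release"]'), github.event.discussion.category.slug)
    steps:
      # ...same steps as in preview.yml above
```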
Here are some things that I’m planning to work on next:
If you made it this far, thank you for reading, I hope you find it useful.
Stay tuned for more in the future. Peace ✌️