content: add blog posts with new naming convention

- Add 10 blog posts covering various technical topics
- Topics include AWS, Go, Emacs, AI engineering, Forgejo, and MLOps
- All posts follow YYYY-MM-DD-slug.md naming convention

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This commit is contained in:
Daisuke Nakahara 2026-02-16 21:49:12 +09:00
parent ff2571d948
commit 0f97dd23ad
10 changed files with 502 additions and 0 deletions


@@ -0,0 +1,11 @@
---
title: 'Hello'
pubDate: 2025-05-09
author: 'Nakahara Daisuke'
tags: ["introduction"]
---
# Hello!
Hello, my name is Daisuke.
This is my first blog post. Nice to meet you.


@@ -0,0 +1,14 @@
---
title: 'I Have Finally Published My Blog Website'
pubDate: 2025-05-15
author: 'Nakahara Daisuke'
tags: []
---
I have finally published my blog website.
Its infrastructure relies on AWS, and I use Microsoft Copilot to manage the hosting.
This tool is particularly useful because it lets me ask about the contents of tabs opened in Microsoft Edge.
Thank you.


@@ -0,0 +1,14 @@
---
title: 'Why I Write Docstrings'
pubDate: 2025-05-16
author: 'Nakahara Daisuke'
tags: ["programming"]
---
I try to write docstrings as much as possible for two main reasons.
First, they help reviewers understand the roles of functions and classes.
Second, I believe that clear, detailed docstrings can enable AI tools to generate code more effectively.
Although I sometimes make mistakes in my docstrings,
I think that even imperfect documentation is better than none.
Moreover, both reviewers and AI tools can help identify and correct these errors.
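As a hypothetical illustration (the function and its behavior are my own invention, not from any particular project), here is the style of docstring I aim for: a one-line summary, then arguments, return value, and edge cases.

```python
def normalize_scores(scores, upper=100.0):
    """Scale a list of raw scores into the range [0, upper].

    Args:
        scores: Iterable of non-negative numeric scores.
        upper: Maximum value of the output range (default 100.0).

    Returns:
        A list of floats where the largest input maps to ``upper``.
        An empty input returns an empty list; an all-zero input
        returns all zeros.
    """
    values = list(scores)
    if not values:
        return []
    peak = max(values)
    if peak == 0:
        return [0.0 for _ in values]
    return [v / peak * upper for v in values]
```

A reviewer (or an AI tool) reading only the docstring can already predict the edge-case behavior, which is exactly the effect I am after.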


@@ -0,0 +1,102 @@
---
title: 'AWS CLI Commands for Managing CloudFormation Stacks'
pubDate: 2026-01-01
author: 'Nakahara Daisuke'
tags: ["AWS"]
---
This article is a collection of AWS CLI commands used while updating the CloudFormation stacks that support this blog.
Each command is grouped by its purpose, focusing on practical workflows for managing CloudFormation stacks safely and explicitly.
### Assume an IAM Role Temporarily
Use the following command to assume an IAM role temporarily and output the credentials as a JSON file.
```bash
aws sts assume-role \
  --role-arn arn:aws:iam::000000000000:role/MyRole \
  --role-session-name my-session-name \
  --profile my-profile \
  > /tmp/creds.json
```
Set environment variables based on the generated JSON credentials file.
```bash
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' /tmp/creds.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' /tmp/creds.json)
export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' /tmp/creds.json)
```
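If `jq` is not available, the same extraction can be sketched in Python. This is an illustrative alternative rather than part of my actual workflow; the file path and key names follow the `assume-role` output format shown above.

```python
import json


def load_credentials(path="/tmp/creds.json"):
    """Read an assume-role JSON output file and return the three
    environment variables as a dict."""
    with open(path) as f:
        creds = json.load(f)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
```

Calling `os.environ.update(load_credentials())` would then apply them to the current Python process.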
### Create a New CloudFormation Stack
Use this command to create a new CloudFormation stack.
```bash
aws cloudformation create-stack \
  --stack-name my-stack-name \
  --template-body file://my-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --region ap-northeast-1
```
### Update an Existing Stack with Parameters
Use this command to update an existing stack while passing parameters.
```bash
aws cloudformation update-stack \
  --stack-name my-stack-name \
  --template-body file://my-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --region ap-northeast-1 \
  --parameters ParameterKey=KeyName,ParameterValue="Value"
```
### Manually Start a Stack Rollback
Use this command to manually continue a stack rollback.
```bash
aws cloudformation continue-update-rollback \
  --stack-name my-stack-name \
  --region ap-northeast-1
```
### Wait for Stack Rollback Completion
Use this command to wait until the rollback process is complete.
```bash
aws cloudformation wait stack-rollback-complete \
  --stack-name my-stack-name \
  --region ap-northeast-1
```
### Create a Change Set to Import Existing Resources
Use this command to create a change set for importing existing (non-IaC) resources into a CloudFormation stack.
```bash
aws cloudformation create-change-set \
  --stack-name my-stack-name \
  --change-set-name my-change-set-name \
  --change-set-type IMPORT \
  --template-body file://my-template.yaml \
  --resources-to-import file://my-import-definition.json \
  --region ap-northeast-1
```
### Check the Status of a Change Set
Use this command to inspect the status and details of a change set.
```bash
aws cloudformation describe-change-set \
  --stack-name my-stack-name \
  --change-set-name my-change-set-name \
  --region ap-northeast-1
```
### Execute a Change Set
Use this command to execute the prepared change set.
```bash
aws cloudformation execute-change-set \
  --stack-name my-stack-name \
  --change-set-name my-change-set-name \
  --region ap-northeast-1
```


@@ -0,0 +1,64 @@
---
title: 'Fixing GitHub Copilot CLI System Vault Error with systemd'
pubDate: 2026-01-02
author: 'Nakahara Daisuke'
tags: ["GitHub", "AI"]
---
When installing GitHub Copilot CLI, you may encounter the error message: `The system vault (keychain, keyring, password manager, etc.) is not available. You may need to install one.` The solution was documented in [this issue](https://github.com/github/copilot-cli/issues/49).
This article explains how to resolve this issue using `systemd`.
### Install via Apt
```bash
$ sudo apt update
```
```bash
$ sudo apt install -y \
  gnome-keyring \
  libsecret-1-0 \
  libsecret-tools \
  seahorse
```
### Verify systemd is running
```bash
$ systemctl is-system-running
```
### Check if the user service is enabled
```bash
$ systemctl --user status gnome-keyring-daemon
○ gnome-keyring-daemon.service - GNOME Keyring daemon
     Loaded: loaded (/usr/lib/systemd/user/gnome-keyring-daemon.service; enabled; preset: enabled)
     Active: inactive (dead)
TriggeredBy: ○ gnome-keyring-daemon.socket
```
```bash
$ systemctl --user enable gnome-keyring-daemon
```
```bash
$ systemctl --user start gnome-keyring-daemon
```
```bash
$ systemctl --user status gnome-keyring-daemon
● gnome-keyring-daemon.service - GNOME Keyring daemon
     Loaded: loaded (/usr/lib/systemd/user/gnome-keyring-daemon.service; enabled; preset: enabled)
     Active: active (running) since Fri 2026-01-02 08:36:46 JST; 3s ago
 Invocation: d830e5563e974edc9265650d11dfa086
TriggeredBy: ● gnome-keyring-daemon.socket
   Main PID: 5222 (gnome-keyring-d)
      Tasks: 5 (limit: 9422)
     Memory: 3.4M (peak: 3.9M)
        CPU: 37ms
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/gnome-keyring-daemon.service
             └─5222 /usr/bin/gnome-keyring-daemon --foreground --components=pkcs11,secrets --control-directory=/run/user/1000/keyring
Jan 02 08:36:46 hostname systemd[649]: Started gnome-keyring-daemon.service - GNOME Keyring daemon.
Jan 02 08:36:46 hostname gnome-keyring-daemon[5222]: GNOME_KEYRING_CONTROL=/run/user/1000/keyring
Jan 02 08:36:46 hostname gnome-keyring-daemon[5222]: another secret service is running
Jan 02 08:36:46 hostname gnome-keyring-d[5222]: another secret service is running
```
> **Note**: This article was translated from Japanese to English and reviewed with the assistance of AI (GitHub Copilot).


@@ -0,0 +1,50 @@
---
title: "The Difference Between %v and %w in Go's fmt.Errorf"
pubDate: 2026-01-12
author: 'Nakahara Daisuke'
tags: ["Go"]
---
I learned the difference between `%v` and `%w` while reviewing Go code with the help of a generative AI model.
## `%v`: String representation of a value
```go
err := someFunction()
return fmt.Errorf("failed to do something: %v", err)
```
This approach simply embeds the error as a string.
As a result, information about the original error is lost, which means it may not be usable with `errors.Is()` or `errors.As()` for error inspection.
## `%w`: Wrapping an error
```go
err := someFunction()
return fmt.Errorf("failed to do something: %w", err)
```
`%w` is a special format specifier used with `fmt.Errorf()` to wrap an error. This allows the original error to be retrieved using `errors.Unwrap()`.
The `%w` verb for wrapping errors was introduced in [Go 1.13](https://go.dev/doc/go1.13).
```go
orig := errors.Unwrap(err)
```
In addition, wrapped errors can be examined using `errors.Is()` and `errors.As()`.
```go
if errors.Is(err, io.EOF) {
// Handle EOF
}
var pathErr *os.PathError
if errors.As(err, &pathErr) {
// Type assertion succeeded
}
```
The official documentation is available [here](https://pkg.go.dev/fmt#Errorf).
---
> **Note**: The review and translation were assisted by a generative AI model. The author is responsible for the final content.


@@ -0,0 +1,50 @@
---
title: 'How to Open Multiple vterm Instances in Emacs Using Buffer Renaming'
pubDate: 2026-01-18
author: 'Nakahara Daisuke'
tags: ["Emacs", "vterm"]
---
## Introduction
If you're using [vterm](https://github.com/akermu/emacs-libvterm) in Emacs, you've probably encountered situations where you need multiple terminal instances running simultaneously.
This article shows you how to leverage buffer renaming to open multiple vterm instances in Emacs.
## The Problem: Can You Only Open One vterm?
By default, when you run `M-x vterm` to start a vterm session and then execute `M-x vterm` again, it simply switches to the existing vterm buffer instead of opening a new terminal.
This behavior leads many users to believe that "you can only run one vterm instance at a time."
## The Solution: Rename Your Buffers
The trick is simple: **by renaming the existing vterm buffer, you can create additional vterm instances**.
### Step-by-Step Guide
1. Launch your first vterm with `M-x vterm`
2. Execute `C-x x r` or `M-x rename-buffer`
3. Enter a new buffer name (e.g., `*vterm-dev*`, `*vterm-git*`, etc.)
4. Run `M-x vterm` again to open a new vterm instance
Repeat these steps as many times as needed to create multiple vterm buffers.
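If you find yourself repeating these steps often, they can be wrapped in a small helper. This is a sketch under my own assumptions: `my/vterm-named` is a name I made up, and it relies on let-binding `vterm-buffer-name`, a user option provided by emacs-libvterm.

```elisp
(defun my/vterm-named (name)
  "Open a vterm buffer named *vterm-NAME*, or switch to it if it exists."
  (interactive "sName: ")
  (let ((buffer (format "*vterm-%s*" name)))
    (if (get-buffer buffer)
        (switch-to-buffer buffer)
      ;; Let-binding `vterm-buffer-name' makes vterm create the new
      ;; buffer under our chosen name instead of the default *vterm*.
      (let ((vterm-buffer-name buffer))
        (vterm)))))
```

With this, `M-x my/vterm-named RET git RET` creates (or jumps to) `*vterm-git*` directly, skipping the manual rename.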
## Real-World Use Cases
Here's how I use this workflow in my daily development:
- **vterm-copilot**: Interactive development with GitHub Copilot CLI
- **vterm-main**: General command execution and file operations
By managing multiple vterm instances, you can complete all your work without ever leaving Emacs.
## Conclusion
With the rise of AI-powered CLI tools, terminal-based workflows are becoming increasingly important. For Emacs users, mastering vterm is more valuable than ever.
I hope this article helps you boost your development productivity in Emacs!
---
> **Note**: The review and translation were assisted by a generative AI model. The author is responsible for the final content.


@@ -0,0 +1,29 @@
---
title: 'A Compass for AI Application Development: Reading "AI Engineering"'
pubDate: 2026-01-25
author: 'Nakahara Daisuke'
tags: ["Book", "AI"]
---
This is a brief book review of "AI Engineering" (Japanese edition) by Chip Huyen, published by O'Reilly Japan.
The book provides a detailed explanation of the essential and typical processes for **AI Engineering**—building applications using AI models.
Topics include "foundation model use cases," "evaluation methods," "evaluation criteria," "prompt engineering," "RAG," and "agents."
Through this book, I learned about AI evaluation processes in an environment where foundation models are frequently updated and consistency is difficult to maintain.
The author states, "I am convinced that evaluation is the biggest bottleneck in AI adoption."
Rather than simply deploying AI, the book proposes evaluating AI applications by categorizing them into evaluation criteria: "domain-specific capabilities," "generation capabilities," "instruction-following capabilities," and "cost and latency."
Since AI can make mistakes, without proper evaluation, it becomes difficult to differentiate AI applications from others, and there is a risk of user churn due to declining trust.
Therefore, a book that teaches evaluation operations was highly beneficial.
Regarding prompt engineering, the book recommends following the "Keep It Simple" principle and, in the initial stages of prompt creation, starting by writing prompts yourself without relying on tools or AI models.
Since I often had AI models write prompts for me, I felt motivated to develop my own skills following the book's advice, especially in those early stages.
Given the extensive content of this book, it was difficult to understand everything from just one reading.
However, because it is well-organized, the book serves as a dictionary-like reference that can be consulted when stuck during AI application development.
---
> **Note**: The review and translation were assisted by a generative AI model. The author is responsible for the final content.


@@ -0,0 +1,132 @@
---
title: 'Operating Self-Hosted Forgejo via CLI: A forgejo-cli Guide'
pubDate: 2026-02-01
author: 'Nakahara Daisuke'
tags: ["Forgejo"]
---
# Introduction
I integrated [forgejo-contrib/forgejo-cli](https://codeberg.org/forgejo-contrib/forgejo-cli) to make it easier for AI coding agents to interact with my self-hosted Forgejo instance. forgejo-cli is similar to [`gh`](https://cli.github.com/), the official CLI tool for GitHub.
# Installation
I installed it on a Linux environment using Nix's home-manager. The Nix 25.11 package repository includes forgejo-cli version 0.3.0.
```nix
{
...
outputs = inputs@{ nixpkgs-old, flake-parts, ... }:
let
mkHome = system: homeDirectory:
inputs.home-manager.lib.homeManagerConfiguration {
pkgs = import inputs.nixpkgs { inherit system; };
modules = [
({ pkgs, ... }: {
home.username = "username";
home.homeDirectory = homeDirectory;
home.stateVersion = "25.11";
home.packages = with pkgs; [
forgejo-cli
];
})
];
};
in
...
}
```
forgejo-cli is launched with the `fj` command:
```bash
$ fj version
fj v0.3.0
```
# Generating an Access Token
Generate a token from Forgejo's frontend at Settings > Applications > Access tokens > Generate new token. I configured the permissions as follows:
- read:notification
- read:organization
- write:package
- write:issue
- write:repository
- write:user
# Logging In with a Token
I stored the generated token in [gnome-keyring](https://gitlab.gnome.org/GNOME/gnome-keyring) first:
```bash
$ echo -n "MY_FORGEJO_PAT" | secret-tool store --label="Forgejo PAT" service forgejo user username@git.example.com
```
Register the key using the `auth add-key` subcommand. Note that you must specify the host with `-H git.example.com`; otherwise, it defaults to `github.com`:
```bash
$ echo -n "$(secret-tool lookup service forgejo user username@git.example.com)" | fj -H git.example.com auth add-key username
```
```bash
$ fj auth list
username@git.example.com
```
```bash
$ fj -H git.example.com whoami
currently signed in to username@git.example.com
```
# Managing Issues and Pull Requests
The v0.3.0 `fj` provides `issue` and `pr` commands for managing issues and pull requests.
## Issues
Create an issue with `fj issue create`. Note that the `-H` flag for specifying the host must come before the subcommand:
```bash
$ fj -H git.example.com issue create --repo <REPO> [TITLE] --body <BODY>
```
To search for issues in a specific repository, use `fj issue search`. The `--repo` option is required:
```bash
$ fj issue search --repo <REPO>
```
To view issue details, use `fj issue view <ISSUE> body`, replacing `<ISSUE>` with the issue number. Issues can be closed with `fj issue close`; the `-w` option adds a closing comment. Interestingly, this command works correctly without requiring the repository name:
```bash
$ fj -H git.example.com issue close <ISSUE> -w <WITH_MSG>
```
## Pull Requests
Create a pull request with `fj pr create`. This creates a pull request requesting to merge the `<HEAD>` branch into the `<BASE>` branch:
```bash
$ fj pr create --repo <REPO> --base <BASE> --head <HEAD> [TITLE] --body <BODY>
```
List pull requests using `fj pr search`. You can filter by state using the `-s` option with either `open` or `closed`:
```bash
$ fj pr search -r <REPO> -s <STATE>
```
View pull request details (title and body) with `fj pr view [ID] body`.
You can access help for any command using the `--help` flag.
# Logging Out
Logout from a host with `fj auth logout`:
```bash
$ fj auth list
username@git.example.com
$ fj auth logout git.example.com
signed out of username@git.example.com
$ fj auth list
No logins.
```
# Conclusion
By introducing `fj` to AI coding agents, I was able to automate issue-based coding and pull request creation. Tools like forgejo-cli that offer CLI operations are particularly valuable for AI agent automation, and I welcome their development.
---
> **Note**: The review and translation were assisted by a generative AI model. The author is responsible for the final content.


@@ -0,0 +1,36 @@
---
title: 'Book Review: Implementing MLOps in the Enterprise - The Importance of Operations Pipeline Design'
pubDate: 2026-02-16
author: 'Nakahara Daisuke'
tags: ["Book", "MLOps"]
---
This is a brief book review of "Implementing MLOps in the Enterprise: A Production-First Approach" (Japanese edition) by Yaron Haviv and Noah Gift, published by O'Reilly Japan.
I read this book because I was working on building a cloud-based infrastructure for regularly generating predictions from machine learning models, and I wanted to learn about MLOps.
Through my current project, I realized that in addition to building highly accurate models, constructing an infrastructure that balances long-term stability with cost reduction was a significant challenge.
MLOps refers to a systematic, practical approach that spans the entire process of designing, building, and operating the deployment of ML models into production environments efficiently.
MLOps consists of four main components:
- Data collection and preparation
- Model development and training
- ML service deployment
- Continuous feedback and monitoring
Echoing what I had felt through my own project, the book defines the goal of MLOps not as building models, but as creating automated ML pipelines that can accept inputs, produce high-quality models, and deploy them into application pipelines.
The most impactful learning for me was "start with designing continuous operations pipelines first, rather than model building."
This resonated because in my current project I had taken the opposite path: building the model first, and only beginning pipeline construction on the cloud after accuracy validation was complete.
By starting with operations pipeline design first, proper abstraction can be achieved, making it easier to reduce dependency on individuals and accelerate growth.
One of the most interesting chapters in the book is "Chapter 10: Implementing MLOps with Rust."
The authors' thinking is reflected in this chapter: "If Rust improves operational performance, why not use it?"
The authors argue that Rust is the most performant and energy-efficient language, and thanks to AI coding tools, it has become much easier to implement than C or C++.
Reading this chapter, I began to want to learn Rust.
At the same time, I also became interested in whether it would be possible to implement MLOps with Fortran, the first language I learned and which is widely used for numerical computation.
This book is highly recommended for engineers involved in machine learning projects.
---
> **Note**: The review and translation were assisted by a generative AI model. The author is responsible for the final content.