<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[// Ewal]]></title><description><![CDATA[I'm a father and husband, a software developer, and a general geek.  I ramble about web development, ai, homelabs, and smart home tech.

More [about me](/about)]]></description><link>https://ewal.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1643257662923/X4BXVx3cG.png</url><title>// Ewal</title><link>https://ewal.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 11:29:35 GMT</lastBuildDate><atom:link href="https://ewal.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[New Auth, Legacy Data, New Options]]></title><description><![CDATA[Well, it’s been just shy of a week since the launch of the new version of TrendWeight. Early feedback was mostly positive and many small bugs were squashed in the first few days.
There were also a couple larger improvements that just went live that a...]]></description><link>https://ewal.dev/new-auth-legacy-data-new-options</link><guid isPermaLink="true">https://ewal.dev/new-auth-legacy-data-new-options</guid><category><![CDATA[trendweight]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Tue, 29 Jul 2025 03:03:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753758202721/0e9a3854-52bd-4268-a8d5-3404400ae3a6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Well, it’s been just shy of a week since the launch of the new version of TrendWeight. Early feedback was mostly positive and many small bugs were squashed in the first few days.</p>
<p>There were also a couple larger improvements that just went live that are worth explaining.</p>
<p>As always, if you run into any issues, don’t hesitate to reach out to <a target="_blank" href="mailto:erv@ewal.net">erv@ewal.net</a></p>
<h3 id="heading-new-authentication-provider">New Authentication Provider</h3>
<p>The previous authentication system was from <a target="_blank" href="https://supabase.com/">Supabase</a>. It <em>mostly</em> worked. Except for two problems:</p>
<ul>
<li><p>People kept getting kicked out and forced to log in again. This was driving me personally crazy.</p>
</li>
<li><p>A small number of people couldn’t log in at all for some unknown reason.</p>
</li>
</ul>
<p>It turns out that maybe Supabase’s authentication system is just not quite mature yet. So I replaced it. TrendWeight now uses <a target="_blank" href="https://clerk.com/">Clerk</a>, which has a better reputation and a more traditional approach. This should be mostly transparent to those who used the system in the first week (although you will have to log in <em>one more time</em> per device). The new system will let you stay logged in as long as you visit the site once every 90 days or so.</p>
<h3 id="heading-legacy-data">Legacy Data</h3>
<p>It turns out that a few users had a lot of historical data in the old site that doesn’t actually live in Withings/Fitbit anymore for whatever reason. So while the new site did re-download all their weight data from Withings/Fitbit, they were seeing less than they were used to. The good news is that the old site’s database still exists.</p>
<p>There is now a feature for those who had accounts on the old site that will pull over whatever data existed there, and that data will be merged with what Withings and/or Fitbit have going forward. If, for whatever reason, you decide you don’t want that old data in your charts, you can disable the legacy data in your settings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753757790468/95b7d63e-37df-42e7-955c-2e2f46e6d6f8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-start-date-improvements">Start Date Improvements</h3>
<p>The final addition is an improvement to Start Dates. The new onboarding UI didn’t prompt people to pick a start date, and most people probably never bothered to visit Settings, so they weren’t aware of the option. So now, new accounts get asked up front if they want to pick a start date (users of the old site had their previous Start Date copied over automatically).</p>
<p>Additionally, the new site behaves a bit differently than the old site in a way that wasn’t what everyone wanted. The old site only showed you data from your Start Date onward. However, the new site gets everything that Withings and/or Fitbit have for you. That’s often useful, but in some circumstances you may want to focus only on weight data from your Start Date forward, so there is a new setting that lets you hide data from before your start date, which will make the weight chart behave like the old site did:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753757763971/181c3930-96af-4f11-8d28-7476f155395f.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[TrendWeight v2 Has Launched!]]></title><description><![CDATA[After more than a decade sitting mostly unchanged, TrendWeight has gotten a fresh update. While the app looks and works mostly the same way you're used to, everything under the hood has been modernized.
What You Need to Know
Logging In
You'll need to...]]></description><link>https://ewal.dev/trendweight-v2-has-launched</link><guid isPermaLink="true">https://ewal.dev/trendweight-v2-has-launched</guid><category><![CDATA[trendweight]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Wed, 23 Jul 2025 21:01:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753209739996/982274c2-ca7e-45bf-813c-64c531e29caf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After more than a decade sitting mostly unchanged, TrendWeight has gotten a fresh update. While the app looks and works mostly the same way you're used to, everything under the hood has been modernized.</p>
<h2 id="heading-what-you-need-to-know">What You Need to Know</h2>
<h3 id="heading-logging-in">Logging In</h3>
<p>You'll need to log in again using your email address. When you log in with the same email you used before, your existing account and all your data will automatically transfer over.</p>
<p>Don't remember which email you used? No problem—just create a new account and connect it to your Withings or Fitbit account. TrendWeight will load all your historical data from your scale.</p>
<p>Also note, the old site used to let you log in with a username only (not an email). That isn’t possible anymore. In the interest of improving security, the new TrendWeight doesn’t use passwords and instead relies on one-time login links via email or on Google, Microsoft, or Apple logins.</p>
<h2 id="heading-whats-new">What's New</h2>
<h3 id="heading-connect-multiple-scales">Connect Multiple Scales</h3>
<p>You can now connect both Withings AND Fitbit to the same account. If you switch scale brands, there's no need to worry about losing historical data—just connect both and TrendWeight will combine everything into one continuous chart.</p>
<h3 id="heading-better-mobile-experience">Better Mobile Experience</h3>
<p>The weight charts on phones now show more than just the last 4 weeks. You can finally see your longer-term progress without switching to a computer.</p>
<h3 id="heading-easy-data-export">Easy Data Export</h3>
<p>Want to analyze your data in Excel or another app? There's now a simple way to download all your weight data from the settings page.</p>
<h2 id="heading-whats-gone">What's Gone</h2>
<p>The chart <em>image</em> sharing feature has been removed since it was rarely used. Though you <em>can</em> still share a link to your dashboard with others—you'll find your sharing URL in the settings.</p>
<h2 id="heading-technical-note">Technical Note</h2>
<p>For those interested, TrendWeight is now open source. You can find the code on <a target="_blank" href="http://github.com/ervwalter/trendweight">GitHub</a> or read about the saga at <a target="_blank" href="https://ewal.dev/series/trendweight">Rebuilding TrendWeight</a>.</p>
<h2 id="heading-questions-or-issueshttpsewaldevseriestrendweight">Questions or Issues?</h2>
<p>If you notice anything not working quite right, please let me know at <a target="_blank" href="mailto:erv@ewal.net">erv@ewal.net</a>.</p>
<p>Thanks for being a TrendWeight user!</p>
]]></content:encoded></item><item><title><![CDATA[Rebooting TrendWeight (Again)]]></title><description><![CDATA[A while ago, I shared that I was going to rewrite TrendWeight from scratch (see Why Rebuild?). And then there were three years of radio silence on the project. The rewrite stalled at 70%—most of the interesting architecture was done, leaving authentic...]]></description><link>https://ewal.dev/rebooting-trendweight-again</link><guid isPermaLink="true">https://ewal.dev/rebooting-trendweight-again</guid><category><![CDATA[claude-code]]></category><category><![CDATA[vite]]></category><category><![CDATA[Tailwind CSS]]></category><category><![CDATA[asp.net core]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Wed, 23 Jul 2025 00:00:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753205941286/d8f93fc6-98b9-46c9-82a0-bd790ad41a01.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A while ago, I shared that I was going to rewrite TrendWeight from scratch (see <a target="_blank" href="https://ewal.dev/trendweight-why-rebuild">Why Rebuild?</a>). And then there were three years of radio silence on the project. The rewrite stalled at 70%—most of the interesting architecture was done, leaving authentication flows and other tedious tasks.</p>
<p>What unstuck this project? AI coding tools, specifically Claude Code.</p>
<p>AI coding tools make tedious tasks easier. Plus, playing with new AI coding assistants is genuinely fun—they're like new toys. Sure, they're not perfect and sometimes generate questionable code, but they transform grunt work into something more engaging. The project stopped feeling like a chore.</p>
<p>Rather than just finish the in-progress Next.js rewrite as is, I decided to adjust the tech stack (again). My 2021 perspective on Next.js was overly optimistic. It's solid for content sites, but for interactive applications like TrendWeight, the server-side complexity adds up quickly. I also fell out of love with Chakra UI, and I wanted to move from Firebase to Supabase.</p>
<p>Current stack:</p>
<ul>
<li><p><strong>Frontend:</strong> Vite (stable, fast, minimal configuration)</p>
</li>
<li><p><strong>Backend:</strong> ASP.NET Core (Vite SPAs need a backend, and I know C#/.NET well)</p>
</li>
<li><p><strong>Styling:</strong> Tailwind (utility classes over opinionated components)</p>
</li>
<li><p><strong>Authentication/Database:</strong> Supabase (self-hostable, Postgres compatible)</p>
</li>
</ul>
<p>Claude Code handled the implementation well, particularly in dealing with grunt work:</p>
<ul>
<li><p>The entire authentication stack (social logins, email magic link)</p>
</li>
<li><p>Migrating from Next.js to Vite + C#</p>
</li>
<li><p>Migrating from Chakra UI to Tailwind</p>
</li>
</ul>
<p>Of course, AI coding tools still make a bunch of mistakes, and TrendWeight is complicated enough that you can’t really vibe code and get away with it. I definitely had to pay close attention to what Claude was doing (and in more than a few places had to go back and clean up questionable choices after the fact), but I still enjoyed the process.</p>
<p>The rewrite is almost done. Just testing and polish remain, primarily around user migration UX. If things go smoothly, you can expect a follow-up post in the next couple weeks announcing the launch of the new site, so stay tuned!</p>
]]></content:encoded></item><item><title><![CDATA[AI as Observer: Chronicling Tabletop RPGs]]></title><description><![CDATA[Last night at the gaming table…

They ascended cautiously, weapons ready. In a chamber on the upper floor, they found her - an emaciated figure kneeling within a circle of power. On either side stood skeletal guardians, each bearing six arms laden wi...]]></description><link>https://ewal.dev/ai-chronicles-unraveling-the-mysteries-of-the-sands-of-yore</link><guid isPermaLink="true">https://ewal.dev/ai-chronicles-unraveling-the-mysteries-of-the-sands-of-yore</guid><category><![CDATA[AI]]></category><category><![CDATA[rpg]]></category><category><![CDATA[audio]]></category><category><![CDATA[Pathfinder]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Sat, 14 Dec 2024 17:12:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734196135263/6fa3a122-f599-4346-94ac-c612b5a1f44d.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last night at the gaming table…</p>
<blockquote>
<p>They ascended cautiously, weapons ready. In a chamber on the upper floor, they found her - an emaciated figure kneeling within a circle of power. On either side stood skeletal guardians, each bearing six arms laden with ancient weapons. They turned toward the party with terrible purpose as the woman raised her hollow eyes.</p>
<p>"You've made it here," she rasped. "Perhaps for good or ill, you now share my fate. If we are to survive this place, it will take all of us."</p>
</blockquote>
<p>I’m in a gaming group that plays a weekly in-person Pathfinder 2e game. Tabletop roleplaying games like Pathfinder 2e thrive on collaboration and creativity, but keeping track of every twist, turn, and NPC encounter can feel like a chore. In my group, no one is really interested in taking detailed notes, even though we all have laptops at the table.</p>
<p>I'm a tech nerd who likes to play with new tools, so I decided to see if AI could help.</p>
<p>For our current mini-campaign, <em>Sands of Yore</em>, I’ve adopted a workflow that uses audio recordings, transcription tools, and AI language models to generate session notes and weave an ongoing narrative. In this post, I’ll walk you through how I set this up, from the hardware to the software to the AI processing. It’s changed how we keep track of everything without disrupting the play at the table.</p>
<p>At the end of the post, you'll find links to the prompts and other tools I use, as well as the full campaign narrative as it exists today.</p>
<h3 id="heading-the-setup">The Setup</h3>
<h4 id="heading-hardware-recording-the-session"><strong>Hardware: Recording the Session</strong></h4>
<p>I started out simple: just recording our sessions with the built-in mic on my MacBook Pro. It worked, but not well. Audio quality was spotty, especially for players sitting farther from the laptop, which led to equally spotty transcripts. Over time, I upgraded to two flat condenser microphones designed for conference tables (Shure CVB-B/O). These are zip-tied to the ceiling in the dedicated game room we play in, keeping the table clear while capturing great audio. The mics connect to a TASCAM Portacapture X6 audio interface that plugs into the MacBook via USB-C. This audio hardware is overkill, but if you have seen my other posts, you probably won’t be surprised.</p>
<h4 id="heading-transcription-turning-audio-into-text"><strong>Transcription: Turning Audio into Text</strong></h4>
<p>I’ve tried several transcription tools and am still iterating on this process:</p>
<ul>
<li><p><strong>Whisper</strong>: Open-source and free, Whisper processes audio locally. It’s decent but doesn’t have speaker recognition and occasionally has weird issues, like looping errors in the transcript. It also requires several manual steps to process the audio after each session.</p>
</li>
<li><p><strong>Apple Notes</strong>: The new Apple Intelligence transcription lets you add an audio recording to a Note, and it will generate a transcript from that audio when you stop recording. It's on par with Whisper. It’s local and free (as long as you’re in the Apple ecosystem), but it makes my laptop grind to a halt for several minutes when processing audio from a four-hour session, and it doesn’t identify speakers either.</p>
</li>
<li><p><strong>Otter AI:</strong> This cloud-based tool was the best by far for transcript quality. It’s accurate, identifies speakers, and even transcribes live during the session. The downside? It’s not free, and you’re uploading your audio to a third-party service, which raises privacy concerns.</p>
</li>
</ul>
<p>Interestingly, even the lower-quality transcripts worked fine for our next step, but Otter made things easier by avoiding manual processing of audio files. I suspect the speaker identification (diarization) helps the LLM understand the transcript, so I want to find a way to get that without relying on a cloud service. I’m looking at automating Whisper with a separate speaker diarization tool via n8n to get similar results without the privacy trade-off and with more automation. That’s still a work in progress.</p>
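<p>If you want to try the local route, the open-source Whisper CLI is a one-liner per session. (The file name, model choice, and output folder here are just examples; larger models are slower but more accurate.)</p>

```shell
# Transcribe a session recording locally; writes transcripts/session-recording.txt
whisper session-recording.mp3 --model medium --language en \
  --output_format txt --output_dir transcripts
```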
<h3 id="heading-the-ai">The AI</h3>
<p>Once I have a transcript, this is where the AI comes in. I’ve tried both ChatGPT and Claude for this process, and Claude is the current winner for our workflow, both because Sonnet 3.5 seems better at this kind of text processing and because of the power of Claude "Projects". Here’s what we do:</p>
<ol>
<li><p><strong>Create the Transcript:</strong> Process the session audio into a text file and save it to our shared Google Drive.</p>
</li>
<li><p><strong>Summarize the Session:</strong> Using a generic summary generation prompt, I paste the transcript into Claude and let it work its magic. If something important gets missed, I ask for tweaks to ensure nothing critical is left out. The prompt is generic, and campaign-specific details come from reference documents that are part of the project (see below).</p>
</li>
<li><p><strong>Generate the Narrative:</strong> A second generic prompt turns the same transcript into a more creative narrative, fleshing out character moments, world details, and story arcs. Again, I’ll ask Claude to refine it if needed.</p>
</li>
<li><p><strong>Save and Share:</strong> Both the summary and narrative go into shared Google Docs. Everyone in the group can review them before the next session, and Claude uses them to keep continuity week-to-week.</p>
</li>
</ol>
<p>The Claude project is tied to our Google Drive, so the LLM can reference background details to improve accuracy and depth:</p>
<ul>
<li><p>World-building notes from our GM that detail the setting, deities, and lore.</p>
</li>
<li><p>A reference doc with PCs and key NPCs, including descriptions, motivations, and relationships.</p>
</li>
<li><p>Previous session summaries and the ongoing narrative.</p>
</li>
</ul>
<p>Storing everything in a shared Google Drive (including prompts) helps keep it organized and easy for everyone to access. And when you add a document to a Claude Project from Google Drive, Claude monitors it for changes over time, so the AI always has the latest content even as people make changes outside of Claude.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>I'm very happy with this workflow.</p>
<ul>
<li><p><strong>Imperfect is fine:</strong> Transcripts don’t need to be perfect. No one reads the raw transcripts, so their quality only needs to be good enough for the AI to work with. The AI can handle a lot of cleanup and still generate great results.</p>
</li>
<li><p><strong>Context limits:</strong> The amount of project context grows with each session, and 4-hour transcripts are big. If it becomes an issue, we might need to split the project into separate narratives and summaries, or trim the background details.</p>
</li>
</ul>
<p>We’re using this system for a short campaign of 5-8 sessions, but it’s been so effective that we’ll definitely bring it into our next long-term campaign. Having AI handle note-taking and storytelling not only saves time but also adds a richer narrative layer to the game. It’s like having an extra player at the table whose sole job is to record and enhance the story.</p>
<p>If you’re struggling with keeping notes or want to try something new with your Pathfinder or Dungeons &amp; Dragons game, give it a shot. AI may be able to take over the responsibility of tracking your sessions—and you’ll have some amazing stories to show for it.</p>
<h3 id="heading-references">References</h3>
<ul>
<li><p>These are the two prompts I use: <a target="_blank" href="https://docs.google.com/document/d/e/2PACX-1vSjQzUghCS6TKlHrPPzUarXtnyEb7NSM4VUzdQT3r_RpXOJM_LwWNp6VkzRIB3jRmlOu2inCpitc4Ak/pub">Summaries and Narratives</a></p>
</li>
<li><p>Our full current campaign narrative: <a target="_blank" href="https://docs.google.com/document/d/e/2PACX-1vSg8M6aunlLQBFbf8Xo7pV2q83f83rXe7CnP2jaHHxAjmRCIdagZfQBysFphgGLeAd47fFHNVqi9qmx/pub">The Sands of Yore, an Adventure</a></p>
</li>
<li><p>Our session summaries: <a target="_blank" href="https://docs.google.com/document/d/e/2PACX-1vSZPomiwKn1BDs4xThcM9gS_Ec4D6DguHM7l3eHxjNmG4pHaj7o1tWp6iX8hDUdTLrEhR2bb0VffEPF/pub">Session Summaries</a></p>
</li>
<li><p>Audio hardware used is two of these microphones: <a target="_blank" href="https://www.amazon.com/gp/product/B00A361UMS">Shure CVB-B/O</a>, and this audio interface: <a target="_blank" href="https://www.amazon.com/Portacapture-Portable-Recorder-Podcast-Podcasting/dp/B0BT571JKW">TASCAM Portacapture X6</a></p>
</li>
<li><p>Audio transcription apps tried: <a target="_blank" href="https://apps.apple.com/us/app/whisper-transcription/id1668083311?mt=12">Whisper Transcription</a>, <a target="_blank" href="https://apps.apple.com/us/app/notes/id1110145109">Apple Notes</a>, <a target="_blank" href="https://otter.ai/">Otter.ai</a></p>
</li>
<li><p>AI platform: <a target="_blank" href="https://claude.ai/">Claude</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[CephFS: Migrating Files Between Pools]]></title><description><![CDATA[When I started with CephFS, I didn't have a good plan for how I wanted subfolders to map to different Ceph pools. I had different kinds of data in the file system, so I knew I wanted some of it to be on fast NVMe storage with simple replication, and ...]]></description><link>https://ewal.dev/cephfs-migrating-files-between-pools</link><guid isPermaLink="true">https://ewal.dev/cephfs-migrating-files-between-pools</guid><category><![CDATA[ceph]]></category><category><![CDATA[cephfs]]></category><category><![CDATA[Homelab]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Fri, 24 May 2024 04:52:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716524795029/109c9406-d8db-40d2-9008-90b6dfd30c5c.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I started with CephFS, I didn't have a good plan for how I wanted subfolders to map to different Ceph pools. I had different kinds of data in the file system, so I knew I wanted some of it to be on fast NVMe storage with simple replication, and other bulk files to be on higher capacity SATA SSD storage with erasure coding to reduce the storage required.</p>
<p>CephFS has the concept of <a target="_blank" href="https://docs.ceph.com/en/reef/cephfs/file-layouts/">File Layouts</a>. Essentially, these are extended attributes on files or directories that give CephFS hints about how it should handle file storage. One of the available fields indicates which data pool should be used to store a file. If you add that field to a directory, it applies to any files within that directory tree (unless an individual file or subfolder overrides the field).</p>
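<p>For example, setting the layout on a CephFS directory is just an extended attribute write, and you can read it back the same way. (The pool and path names below are illustrative; the pool must already be attached to the file system with <code>ceph fs add_data_pool</code> before it can be referenced.)</p>

```shell
# Assign all NEW files created under ./media to the (hypothetical) pool "bulk-ec"
setfattr -n ceph.dir.layout.pool -v bulk-ec ./media

# Check which pool a directory or an individual file is mapped to
getfattr -n ceph.dir.layout.pool ./media
getfattr -n ceph.file.layout.pool ./media/movie.mkv
```

Remember that this only affects files created after the attribute is set — which is exactly the problem the rest of this post is about.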
<p>So I did this when I started out, separating my terabytes of media files from my smaller amount of more performance-sensitive container data files.</p>
<p>But I didn't get it right. Oops.</p>
<p>After I had populated the file system with 20+ TB of files, I changed my mind about which pools I wanted to use. It's easy to change those extended attributes anytime you want, but that doesn't affect existing files—only new ones.</p>
<p>To get existing files to move to the newly assigned pools, you essentially have to recreate them so that CephFS sees them as new files and puts them in the right place.</p>
<p>I wanted to migrate files from their current pool to the newly assigned pool, and I didn't want to do it by hand.</p>
<p>After some searching for solutions, I found a piece of Python <a target="_blank" href="https://git.sr.ht/~pjjw/cephfs-layout-tool">code</a> written by Peter Woodman that sort of did this, but it didn't work exactly how I wanted. However, it was good inspiration.</p>
<p>I'm not usually a Python programmer, so I turned to ChatGPT, and it helped me create a similar standalone script that systematically processes a directory tree to move existing files to a new pool.</p>
<p>The code I ultimately ended up with and used is here: <a target="_blank" href="https://gist.github.com/ervwalter/5ff6632c930c27a1eb6b07c986d7439b">Migrate files in cephfs to a new file layout pool recursively (github.com)</a></p>
<p>In simple terms, the script does the following:</p>
<ul>
<li><p>Recursively loops through all files and folders, starting with the current folder</p>
</li>
<li><p>For each file, checks if it is already in the desired pool by reading the virtual attribute. If it's already where it is supposed to be, skips it</p>
</li>
<li><p>If the file is not yet in the correct pool and is a regular file, copies it to a scratch folder, then moves it back to the original location. This essentially rewrites the file into the new pool</p>
</li>
<li><p>Restores file ownership and permissions after the copy/move</p>
</li>
<li><p>Handles symlinks and hard links appropriately as well</p>
</li>
</ul>
<p>All of this is parallelized so that the Ceph backend can be kept busy, by default processing 4 files simultaneously. Since every file copy essentially reads and rewrites the entire file, this is expensive on I/O. The parallelization helps ensure there is always something being copied, even during brief gaps where metadata is being checked, etc.</p>
<p>Ultimately, this worked even though it was slow (reading and rewriting 20+ TB of data takes a while). But it was automatic and happened in the background, and I didn't have to manually re-populate my file system from scratch, which is what I wanted to avoid.</p>
]]></content:encoded></item><item><title><![CDATA[Homelab Storage: A Journey to Ceph]]></title><description><![CDATA[tldr: I started with a modest NAS setup in my homelab, but my curiosity led me to build an 8-node Ceph distributed storage cluster. Ceph, though overkill for a home environment, offers scalable, resilient storage by distributing data across multiple ...]]></description><link>https://ewal.dev/homelab-a-journey-to-ceph</link><guid isPermaLink="true">https://ewal.dev/homelab-a-journey-to-ceph</guid><category><![CDATA[cephfs]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[ceph]]></category><category><![CDATA[TrueNAS]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[unraid]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Thu, 23 May 2024 22:54:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716524012461/c21b8e5d-27d2-4f6b-b874-dc5bba1d8cdf.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>tldr: I started with a modest NAS setup in my homelab, but my curiosity led me to build an 8-node Ceph distributed storage cluster. Ceph, though overkill for a home environment, offers scalable, resilient storage by distributing data across multiple servers. It supports various storage types like block, file, and object storage, making it versatile for different needs. My current setup includes management nodes and storage nodes with NVMe and SATA SSDs, providing high performance and capacity. I use CephFS for Docker containers, RBD for Proxmox VMs, and maintain backups with TrueNAS and Unraid. This journey has transformed my homelab into a robust, interesting playground for storage technologies.</p>
</blockquote>
<p>Storage in my homelab started modest and pretty typical: A NAS appliance and some PCs/Macs that connected to it. Fast forward to today and I have an 8-node Ceph distributed storage cluster and multiple NAS servers. This is the story of my Ceph rabbit hole.</p>
<h3 id="heading-in-the-beginning">In the beginning...</h3>
<p>The beginning was unremarkable. I have historically used a variety of NAS appliances from Synology, TrueNAS, and others. Storage was just a thing that I had. It sat in the background and was uninteresting.</p>
<p>But my homelab is a hobby. I mess with it because I have fun messing with it. The thing that triggered my journey into "let's play with storage" was actually a desire for non-NFS shared storage. I was using NFS mounts on my NAS to provide shared storage for VM virtual disks on my Proxmox cluster. But when I started playing with Kubernetes and Docker Swarm, I found myself wanting shared storage for containers that wasn't simply an NFS mount—if only for the fun of it.</p>
<p>I was already using Proxmox, and the Proxmox console has this tab for "Ceph." So naturally, I started to read about it, and it intrigued me.</p>
<h3 id="heading-ceph">Ceph</h3>
<p>Ceph is overkill for a homelab of my size. A NAS is more cost-effective and straightforward. And boring. Ok, that's out of the way.</p>
<p>Ceph is a distributed storage system designed to scale out by combining a bunch of commodity hardware into one big storage cluster. What's cool about Ceph is that instead of storing data on single-purpose storage appliances like a traditional NAS, Ceph spreads the data across many regular servers, automatically keeping multiple copies so that everything keeps working even if some hardware fails.</p>
<p>So in my homelab, instead of having one big NAS box, I've got a whole bunch of small servers with Ceph installed that all work together to provide one big distributed storage system that's more resilient and scalable than a normal NAS. And not boring.</p>
<p>With Ceph, I just add hard drives (SSDs in my case) to the cluster, and their storage capacity is assimilated into the cluster. I define "pools" to store different kinds of data, and each pool has its own replication rules that determine how data in that pool is spread across devices.</p>
<p>In a traditional NAS with some variation of RAID, things work at the array level. That has a couple of interesting aspects. If you have a drive failure, you replace the drive and the array rebuilds itself to ensure you have protection. Until the drive is replaced, your array is degraded.</p>
<p>Also, if you need more space, you generally can't just add another drive. The array was built with a specific set of drives, and you can't just turn a 5-drive RAID 5 array into a 6-drive RAID 6 array. There are NAS solutions like Unraid that get around this. There are also solutions like ZFS that let you essentially combine multiple "sub arrays" into a larger pool of data. Of course, each has pros and cons.</p>
<p>With Ceph, data is not managed in "arrays" or at the level of drives. Instead, data in the cluster is broken down into blocks of data (called objects by Ceph), and those objects are distributed across available drives. Objects are replicated according to the rules you specify, either with simple "make multiple copies" replication or with erasure coding, which is a parity-like approach similar in concept to RAID5/6 but at the object level instead of at the drive level.</p>
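<p>Concretely, the two replication styles look like this at the command line. (Pool names, placement-group counts, and the k/m split here are illustrative; tune them for your own cluster.)</p>

```shell
# Replicated pool: every object stored as 3 full copies
ceph osd pool create fast-nvme 128 replicated
ceph osd pool set fast-nvme size 3

# Erasure-coded pool: each object split into 4 data + 2 parity chunks,
# surviving 2 failures at ~1.5x overhead instead of 3x
ceph osd erasure-code-profile set k4m2 k=4 m=2
ceph osd pool create bulk-ssd 128 erasure k4m2
```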
<p>When a drive fails, the pools that have data on that drive become degraded, but Ceph immediately starts making new copies of the lost data on available space elsewhere in the cluster. Usually, within a short period of time (relative to a RAID array rebuild), your cluster is automatically back to a healthy state.</p>
<p>All you need to make this viable is enough free space in your cluster to accommodate one or more drives failing, and that free space can be small amounts across multiple nodes. You can replace the drive that died at your leisure so that you once again have spare capacity in the cluster for future failures.</p>
<p>Adding more capacity is as easy as adding a drive. Ceph will assimilate that new drive into the appropriate pools (based on rules), and behind the scenes, it will start rebalancing where data is stored so that the new drive takes on a share of the responsibility for the existing data.</p>
<p>When it comes to making this pool of storage available to clients, Ceph provides multiple options. Block storage (called RBD) is ideal for things like Proxmox virtual disk images. File storage (CephFS) lets remote servers mount folders similarly to how they do NFS shares, and this is what I use for the shared storage needs of my Docker cluster. Ceph also has S3-compatible object storage for cases where that is needed, but I don't currently have a use case for that in my own lab.</p>
<p>The other aspect I found really interesting was how the distribution of data actually works. In a distributed storage system, you might imagine that any time a client needs data, it first goes to some central server and asks, "Where can I find data X?" The central server then looks up the location and directs the client to the actual node that has the data. But Ceph does data distribution <em>with math</em>.</p>
<p>CRUSH (Controlled Replication Under Scalable Hashing) is the secret sauce that allows any Ceph client to calculate where a particular piece of data is stored in the cluster without having to ask a central lookup table. It does this by using a deterministic hashing function that takes into account factors like the cluster topology and desired replication level.</p>
<p>This means that clients can read and write data directly to the right places without a centralized bottleneck, which is a big part of what allows Ceph to scale out so well in real-world scenarios much larger than my lab.</p>
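<p>To illustrate the "placement by math" idea (this is a toy sketch, not the actual CRUSH algorithm), rendezvous-style hashing has the same key property: every client ranks the available OSDs for an object using only a hash of the object name and the OSD list, so every client independently computes the same placement:</p>

```typescript
// Toy sketch of deterministic placement (rendezvous/HRW hashing).
// Not the real CRUSH algorithm, but it shares the key property:
// any client with the cluster map computes the same locations,
// with no central lookup service in the data path.

// FNV-1a: a small, fast, non-cryptographic string hash.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Rank every OSD by hash(objectName + osd) and take the top `replicas`.
function placeObject(objectName: string, osds: string[], replicas: number): string[] {
  return [...osds]
    .sort((x, y) => fnv1a(objectName + ":" + y) - fnv1a(objectName + ":" + x))
    .slice(0, replicas);
}
```

<p>Because the ranking depends only on the object name and the list of OSDs, adding or removing a drive only reshuffles the objects whose top-ranked OSDs change, which is the same kind of property that lets Ceph rebalance incrementally instead of reshuffling everything.</p>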
<p>Sorry. I think Ceph is really cool, and I got to rambling there a bit.</p>
<h3 id="heading-my-ceph-cluster">My Ceph Cluster</h3>
<p>As I mentioned, I originally heard about Ceph because of Proxmox, and in fact, my original cluster was a Proxmox-managed Ceph cluster. That was cool and worked okay, but it meant anytime I wanted to add a new node to the cluster, it had to be a full-blown Proxmox node even if it was never going to perform any VM compute duties.</p>
<p>So recently, I transitioned to a standalone Ceph cluster managed by cephadm, one of the official orchestration engines available for Ceph. I now have a nice little cluster that can be centrally managed either from the built-in dashboard or from the command line on any of the three management nodes.</p>
<p>The orchestration engine makes sure that required services are deployed appropriately, including a monitoring suite. And anytime I add a new drive to any of the nodes, the orchestration engine notices it and adds it to the pool of available storage.</p>
<ul>
<li><p>3x Minisforum MS-01 boxes (Intel i9-12900H, 64GB RAM, 10GbE, Ubuntu 22.04)</p>
<ul>
<li><p>These are essentially management nodes. They run the mon, mgr, and mds services as well as things like Prometheus, Grafana, Alertmanager, and other cluster support services.</p>
</li>
<li><p>Each also has 2x NVMe M.2 SSDs (on top of the drive used for the OS) that back NVMe-specific storage pools for performance-sensitive workloads.</p>
</li>
</ul>
</li>
<li><p>5x general "storage" nodes (misc Intel CPUs, 32GB RAM, 10GbE, Ubuntu 22.04)</p>
<ul>
<li><p>These are mini-ITX tower PCs that were originally used as NAS boxes, so they have plenty of SATA connectivity. These don't run any Ceph services except OSDs.</p>
</li>
<li><p>Each currently has 5x SATA SSDs that are the default storage for pools and certainly for the pool that contains most of my media files.</p>
</li>
</ul>
</li>
</ul>
<p>I have allocated storage as follows:</p>
<ul>
<li><p>A CephFS file system that is mounted by my Docker VMs and serves as shared storage for container data so that a container can start up on any node and have access to its data. The filesystem is backed by two pools depending on the subfolder:</p>
<ul>
<li><p>An NVMe pool that is used for Docker application files by default. This pool uses 3:1 replication.</p>
</li>
<li><p>A higher capacity but slower SATA SSD pool that is used for media files (photos, movies, TV shows, audiobooks). This pool uses 2+2 erasure coding to minimize additional storage costs versus 3:1 replication.</p>
</li>
<li><p>Individual folder trees are mapped to specific backend pools using <a target="_blank" href="https://docs.ceph.com/en/reef/cephfs/file-layouts/">CephFS File Layouts</a></p>
</li>
</ul>
</li>
<li><p>An RBD pool that is used by Proxmox to store VM virtual disks. This pool is backed by the NVMe drives for performance reasons and also uses 3:1 replication.</p>
</li>
</ul>
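<p>The tradeoff between 3:1 replication and 2+2 erasure coding is easiest to see as usable capacity. A quick sketch of the arithmetic:</p>

```typescript
// Usable fraction of raw capacity for each protection scheme.
// N-way replication stores every byte N times; k+m erasure coding
// stores k data chunks plus m coding chunks.
const usableReplicated = (copies: number): number => 1 / copies;
const usableErasure = (k: number, m: number): number => k / (k + m);

// 3:1 replication: 1/3 of raw capacity is usable (~33%).
// 2+2 erasure coding: 2/4 is usable (50%), while still tolerating
// the loss of any two chunks.
```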
<p>All nodes in the cluster have 10 GbE connections to each other and to the Proxmox hosts and VMs that connect to the cluster.</p>
<h3 id="heading-backups">Backups</h3>
<p>Ceph is now my primary storage mechanism, but I still have my old NAS servers. They are effectively used as backup targets at this point. One is currently running TrueNAS, and on the other I am experimenting with Unraid.</p>
<p>The reason I have two NAS servers is because one of them used to be offsite (3-2-1 backups), but I had to bring it back to rebuild it. Once I decide if I like Unraid more than TrueNAS as a backup NAS, I will probably move one of them offsite again.</p>
<p>The data in the CephFS file system is backed up to each NAS. The TrueNAS server runs rsync tasks hourly to get any changes. For the Unraid server, I am experimenting with <a target="_blank" href="https://www.borgbackup.org/">Borg Backup</a>. I am not sure which I will standardize on in the long term.</p>
<p>Proxmox does its nightly VM backups of virtual disks to the Unraid server currently.</p>
<p>Additionally, I still have offsite backups for the most valuable files (family photos, important documents) via a nightly rclone task on the TrueNAS server that pushes an encrypted backup to <a target="_blank" href="https://www.storj.io/">Storj</a>. Depending on how my experiments with Borg Backup go, I may use it to do remote backups to something like <a target="_blank" href="https://www.borgbase.com/">BorgBase</a>, but that's TBD.</p>
<h3 id="heading-wrapping-up">Wrapping Up</h3>
<p>So that's my journey into the world of Ceph. What started as a simple curiosity has turned into a pretty cool distributed storage setup that keeps my homelab both functional and interesting. Sure, it's overkill for a home environment, and not inexpensive, but it's not boring!</p>
]]></content:encoded></item><item><title><![CDATA[Going Overboard with My Homelab]]></title><description><![CDATA[My homelab is certainly more elaborate than necessary, but that's because I love to tinker. It's one of my hobbies, and truth be told, my lab's main purpose is just that—being a hobby. While some components are genuinely useful, the overall setup is ...]]></description><link>https://ewal.dev/going-overboard-with-my-homelab</link><guid isPermaLink="true">https://ewal.dev/going-overboard-with-my-homelab</guid><category><![CDATA[Homelab]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[ceph]]></category><category><![CDATA[unifi]]></category><category><![CDATA[Home Assistant]]></category><category><![CDATA[opnsense]]></category><category><![CDATA[tailscale]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker swarm]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Thu, 23 May 2024 04:44:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716426623171/23f60d53-5b8e-475b-ab19-031c468f2b22.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My homelab is certainly more elaborate than necessary, but that's because I love to tinker. It's one of my hobbies, and truth be told, my lab's main purpose is just that—being a hobby. While some components are genuinely useful, the overall setup is delightfully over the top.</p>
<p>The main parts of the lab are:</p>
<ul>
<li><p>A 3-node <a target="_blank" href="https://www.proxmox.com/">Proxmox</a> cluster that runs VMs</p>
</li>
<li><p>An 8-node <a target="_blank" href="https://ceph.io/">Ceph</a> cluster that provides primary storage</p>
</li>
<li><p>Two separate NAS appliances that serve as backup targets (one of which I plan to move off-site)</p>
</li>
<li><p>A <a target="_blank" href="https://ui.com/">Ubiquiti</a> network with an <a target="_blank" href="https://opnsense.org/">OPNsense</a> firewall/router and a <a target="_blank" href="https://tailscale.com/">Tailscale</a> overlay network connecting most servers</p>
</li>
<li><p>A <a target="_blank" href="https://ui.com/camera-security">Unifi Protect</a> security camera system combined with <a target="_blank" href="https://www.scrypted.app/">Scrypted</a> in a VM to make cameras available to Apple HomeKit</p>
</li>
<li><p><a target="_blank" href="https://www.home-assistant.io/">Home Assistant</a> in a VM</p>
</li>
<li><p>A VM dedicated to Generative AI tinkering</p>
</li>
<li><p>Many containerized applications in a 3-node Docker Swarm cluster</p>
</li>
</ul>
<p>This is the start of a series of posts where I will dig into the details of a few of these and give some perspective on why I ended up with the current state. In the meantime, here is a general overview diagram of what things look like as of May 2024:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736723637696/3479ef28-8af5-4eeb-befe-ec97ac192cf1.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Which React Framework to Use?]]></title><description><![CDATA[I am rebuilding TrendWeight from the ground up, and this article is about one aspect of that project.
The new TrendWeight was always going to be created with React. That was a given. I fell in love with the core idea of React years ago (given state -...]]></description><link>https://ewal.dev/which-react-framework-to-use</link><guid isPermaLink="true">https://ewal.dev/which-react-framework-to-use</guid><category><![CDATA[React]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[frameworks]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Fri, 28 Jan 2022 04:42:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643399866631/qW2fI7U2t.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I am</em> <a target="_blank" href="https://ewal.dev/series/building-trendweight"><em>rebuilding TrendWeight</em></a> <em>from the ground up, and this article is about one aspect of that project.</em></p>
<p>The new TrendWeight was always going to be created with React. That was a given. I fell in love with the <em>core idea</em> of React years ago (given state -&gt; render a UI), and am a drinking-all-the-koolaid fan at this point.</p>
<p>But there are parts of the React ecosystem that are not fun. I don't want to have to figure out bundling. I don't ever want to configure webpack by hand. That's fun for some people, but not me. I just want it to work. And keep in mind that bundling is just <em>one</em> of the things you need to figure out to create a production-ready app with React.</p>
<p>Fortunately, there are many "meta frameworks" these days that abstract all of the stuff away and just give you a clean development experience.</p>
<p>For the new TrendWeight, I had to choose one.</p>
<h2 id="heading-create-react-app">Create React App</h2>
<p><a target="_blank" href="https://create-react-app.dev/">Create React App</a> (CRA) has been around for a long time. It was created by Facebook (who also made React), so surely it knows how to do React "the right way". Even though it sometimes positions itself as a quick prototyping tool, I have made real apps with it in the past. I have also maintained apps for years that were based on CRA. It works.</p>
<p>But... It feels like it's nearing the end of its prime (if it hasn't already ended).</p>
<ul>
<li><p>The dev experience is ok, but subpar compared to the alternatives.</p>
</li>
<li><p>It can <em>only</em> make client-side SPA apps (vs server-rendered apps or hybrid apps)</p>
</li>
<li><p>One of the creators of CRA <a target="_blank" href="https://github.com/facebook/create-react-app/issues/11180#issuecomment-874748552">explained recently</a> that he's not sure CRA is the right long term path for React apps.</p>
</li>
</ul>
<p>CRA does what it does really well, but it just doesn't feel like the right framework to start a new project in if I want to stick with something for years to come <em>and</em> be able to take advantage of future improvements to React.</p>
<h2 id="heading-nextjs">Next.js</h2>
<p><a target="_blank" href="https://nextjs.org/">Next.js</a> describes itself as the React framework for production. It has a lot going for it...</p>
<ul>
<li><p>It does everything that CRA does and then a lot more.</p>
</li>
<li><p>It can build client-side-only apps but can also build full stack apps that do server-side rendering (SSR), entirely statically generated sites (SSG), or a sort of mix between the two with incremental static regeneration (ISR).</p>
</li>
<li><p>Excellent SEO. Since pages can be either server rendered at runtime or statically rendered at build time, search engines see the full contents of pages without having to load and run JavaScript.</p>
</li>
<li><p>It handles routing out of the box (normally you would use React Router by hand in a CRA or similar framework)</p>
</li>
<li><p>It does code splitting by page automatically</p>
</li>
<li><p>It has a lot of focus on production performance and includes optimizations for image handling, third party script loading, and font loading.</p>
</li>
<li><p>It has a company (<a target="_blank" href="https://vercel.com">Vercel</a>) and a professional team behind it that is committed to continuing to improve it over time.</p>
</li>
<li><p>The Next.js team appears to be collaborating effectively with both the core React team on things like React Server Components and with the Google web team on optimizing apps for the best user experiences.</p>
</li>
</ul>
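<p>As a concrete sketch of the SSG/ISR model (the page and its data here are hypothetical, but <code>getStaticProps</code> is the real Next.js convention): a page exports an async function that Next.js runs at build time, and an optional <code>revalidate</code> interval turns plain static generation into incremental static regeneration:</p>

```typescript
// Sketch of a Next.js page using static generation with ISR.
// Next.js calls getStaticProps at build time; with `revalidate`,
// it re-runs it in the background at most once per 60 seconds.
type Props = { generatedAt: string };

export async function getStaticProps(): Promise<{ props: Props; revalidate: number }> {
  // In a real page this might fetch from a CMS or database.
  return {
    props: { generatedAt: new Date().toISOString() },
    revalidate: 60, // ISR: regenerate at most once per minute
  };
}

// The component receives the props and is rendered to HTML on the
// server, so crawlers see full content without running JavaScript.
// (Plain string return instead of JSX to keep the sketch minimal.)
export default function HomePage({ generatedAt }: Props) {
  return `Page generated at ${generatedAt}`;
}
```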
<p>Sounds great. There are some possible complications that should be noted, though.</p>
<ul>
<li><p>Because it's not just a client-side app building framework, most of its coolest features need a backend server (or a serverless platform, which ultimately still runs on servers).</p>
</li>
<li><p>Many would argue that the "best" place to host a Next.js app is at <a target="_blank" href="https://vercel.com">Vercel</a>. The devops experience is top notch, and everything just works on the day that a new feature is added to Next.js. But Vercel's pricing isn't great if you have a large team or if you use a lot of bandwidth. You can host it elsewhere, but you usually lose some of the magic Vercel brings.</p>
</li>
</ul>
<p>The downsides of Vercel's hosting pricing aren't likely to be an issue for me (I'm a one-person team and TrendWeight isn't popular enough to use more than 1TB of bandwidth in a month).</p>
<p>I really like Next.js. I played with it and fell in love. I'm probably biased at this point, to be fair. The <a target="_blank" href="https://trendweight.io">beta version</a> of the new TrendWeight is currently a Next.js app.</p>
<h2 id="heading-vite">Vite</h2>
<p><a target="_blank" href="https://vitejs.dev/">Vite</a> is a relative newcomer for React apps. By the time the chatter about Vite got loud enough for me to look at it, I had already started building the new TrendWeight with Next.js, so this didn't get a super serious look.</p>
<ul>
<li><p>It aims to have a blazing fast developer experience while also making all the hard things easy (similar goals to CRA).</p>
</li>
<li><p>It's very new. I can't point to something specific, but it just "feels" not quite as mature as CRA or Next.js.</p>
</li>
<li><p>I've recently used Vite for a small project at work and I was impressed with the speed of starting up the dev server. It was <em>really</em> nice.</p>
</li>
</ul>
<p>I think it's likely that Vite could be a contender as a long term replacement for CRA for those that are looking for the same kinds of dev and build tooling that CRA provides, but with newer techniques.</p>
<p>Even if I had looked at this before picking Next.js, I probably wouldn't have picked it because I want more than just a client-side app. Vite has some SSR support, but it's still experimental.</p>
<h2 id="heading-remix">Remix</h2>
<p><a target="_blank" href="https://remix.run/">Remix</a> <em>was</em> around when I started looking at frameworks, but it was in a closed beta available only to people willing to pay for it. The launch was... weird. Much of the messaging came across as a bit like, "Good news, we've finally figured out how to build <em>good</em> web apps. You're welcome!" The perceived smugness turned off a lot of people. That and the fact that it was closed made it uninteresting to me at the time.</p>
<p>Fast forward to November 2021 and Remix is now open source and people have been able to start looking at what it could actually do, and parts of it are very interesting.</p>
<ul>
<li><p>It's more like Next.js than CRA or Vite in that it is a full stack framework with a server.</p>
</li>
<li><p>The server can run on lots more types of hosts than Next.js, including both Cloudflare Workers and Fly.io, meaning your "server" can run fully on the "edge" right next to your users.</p>
</li>
<li><p>It presents a different (for React apps, at least) way of thinking about the boundary between client and server. For example, you write functions that are responsible for loading data and that sit beside your components. The Remix server seamlessly uses these functions when rendering your UI without you having to write typical code to call an API to get data. The Remix team asserts that this is a better way.</p>
</li>
</ul>
<p>It's interesting, but similar to the Vite situation, it became interesting well after I had already started building with Next.js. I'm still watching it carefully though.</p>
<p>I'm inclined to build something with it at some point just to get some hands on experience. My <a target="_blank" href="https://games.ewal.net">games site</a> is due for a refresh, and I may try Remix for that.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>As I mentioned above, for TrendWeight I picked Next.js. I've been pretty happy with the choice so far. It's been easy to get up and running, and both the dev experience and the devops experience have been great. I haven't run into any major roadblocks, so I'm feeling pretty good about the choice.</p>
<p>If Next.js worked out-of-the-box on Cloudflare Workers (Cloudflare is killing it with their Edge Compute services), I think it would be a slam dunk choice for most of my projects. I'm not sure how motivated Vercel is to make Next.js work on a competitive hosting service though.</p>
]]></content:encoded></item><item><title><![CDATA[TrendWeight: Why Rebuild?]]></title><description><![CDATA[So I'm rewriting TrendWeight from scratch.  Before jumping into technical details about the new TrendWeight web app, let me set the stage by describing how the currently-live app works.
First, I should be clear that TrendWeight works fine.  The pictu...]]></description><link>https://ewal.dev/trendweight-why-rebuild</link><guid isPermaLink="true">https://ewal.dev/trendweight-why-rebuild</guid><category><![CDATA[ASP.NET]]></category><category><![CDATA[Azure]]></category><category><![CDATA[technology stack]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Thu, 27 Jan 2022 05:31:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/K5sjajgbTFw/upload/v1643261210287/3BaVpO71N.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So I'm rewriting <a target="_blank" href="https://trendweight.com">TrendWeight</a> from scratch.  Before jumping into technical details about the <em>new</em> TrendWeight web app, let me set the stage by describing how the currently-live app works.</p>
<p>First, I should be clear that TrendWeight works fine.  The picture I chose for this article is an exaggeration.  TrendWeight is not broken.  I doesn't have meaningful scalability problems.  It's <em>fine</em>.  </p>
<p>So why bother with a rewrite?  It's just old.  </p>
<p>There are a number of improvements that people have asked for over the years, and I've avoided tackling them because I don't enjoy working in that old tech stack anymore.  I could, but it's just not fun.</p>
<p>So the biggest reason to rewrite it is that <strong>rewriting it will be fun.</strong>  Not only does the app get freshened up, but I also get to play with new tech!</p>
<h2 id="heading-trendweight-tech-stack">TrendWeight Tech Stack</h2>
<p>So what is the old tech behind the current TrendWeight?</p>
<ul>
<li>The web framework is ASP.NET MVC (version 3).  </li>
<li>The frontend uses vanilla JavaScript and several old libraries, notably including jQuery (!) and <a target="_blank" href="https://knockoutjs.com/">Knockout.js</a> 2.1.  The capabilities of Knockout were pretty great, and in a way, they were ahead of their time.</li>
<li>The core chart component that renders the <a target="_blank" href="https://trendweight.com/demo/">main weight graph</a> is <a target="_blank" href="https://www.highcharts.com/products/stock/">Highcharts Stocks</a>.  This is still a great component and I plan to keep using it, though I will move to the latest version.</li>
<li>Styling is <a target="_blank" href="https://getbootstrap.com/2.0.4/">Bootstrap v2.0.4</a></li>
<li>Data is stored in a SQL Server database with nothing weird with how it is used.</li>
<li>User management / authentication uses ASP.NET Membership (the precursor to ASP.NET Identity).  Essentially it's a series of SQL tables in the same SQL database along with a set of C# classes that manage the gross parts of securely managing passwords, etc.</li>
<li>The site is hosted in <a target="_blank" href="https://azure.microsoft.com/en-us/services/app-service/">Microsoft Azure App Service</a></li>
</ul>
<p>TrendWeight just hasn't gotten the care and feeding it probably deserved over the past decade and so lots of its dependencies are woefully out of date.  Some days I'm surprised that it still actually works in modern browsers 10 years after it was initially created.</p>
<p>So something needs to be done to bring the app up to date and to make it maintainable.  Instead of incrementally updating things piece by piece, I decided a while ago that it would be easier to just start fresh using a modern tech stack.</p>
<p>A new, clean code base will also let me make the source code available without being embarrassed by how clunky everything is.</p>
<p>I'll talk more about the planned tech stack in another article...</p>
]]></content:encoded></item><item><title><![CDATA[Let's Try This Again]]></title><description><![CDATA[I go through cycles where I get motivated to write a bunch of articles about interesting things I'm doing.  I write a few.  I get caught up with other things.  I forget to write the rest.  Before long, the blog is stagnant again.
Will this time be di...]]></description><link>https://ewal.dev/lets-try-this-again</link><guid isPermaLink="true">https://ewal.dev/lets-try-this-again</guid><category><![CDATA[blog]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Wed, 26 Jan 2022 23:17:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/JAsWBt-IOj8/upload/v1643307755887/TbyYkU8oU.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I go through cycles where I get motivated to write a bunch of articles about interesting things I'm doing.  I write a few.  I get caught up with other things.  I forget to write the rest.  Before long, the blog is stagnant again.</p>
<p>Will this time be different?  Probably not.  Am I going to start a new cycle anyway?  Yes!</p>
<p>Anyway, there are a few things bouncing around in my head that I want to capture and share with others that generally fall into these categories...</p>
<ul>
<li>Building apps with <a target="_blank" href="https://nextjs.org/">Next.js</a></li>
<li>React and its ecosystem (Next.js, Remix, etc.)</li>
<li>Smart Stuff, Home Assistant, and Node Red</li>
</ul>
<p>This cycle will be done with <a target="_blank" href="https://hashnode.com/@ervwalter/joinme">Hashnode</a> because I'm at a point where I don't want to spend time maintaining a blog platform.  I've done that in the past.  This time I'm going to let someone else manage it.</p>
<p>Let's do it...</p>
]]></content:encoded></item><item><title><![CDATA[Day-to-Day Weight Fluctuations and Mental Stress]]></title><description><![CDATA[Note: This was originally published on the TrendWeight blog. Since that blog is no longer active, I am reposting the few useful articles from that site here for posterity.
A common question on various weight loss forums I see is, "How often should I ...]]></description><link>https://ewal.dev/day-to-day-weight-fluctuations-and-mental-stress</link><guid isPermaLink="true">https://ewal.dev/day-to-day-weight-fluctuations-and-mental-stress</guid><category><![CDATA[trendweight]]></category><dc:creator><![CDATA[Erv Walter]]></dc:creator><pubDate>Fri, 05 Oct 2012 17:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Of8C-QHqagM/upload/254d97366adeb470278ca4f95728fbe4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Note: This was originally published on the TrendWeight blog. Since that blog is no longer active, I am reposting the few useful articles from that site here for posterity.</em></p>
<p>A common question on various weight loss forums I see is, "How often should I weigh myself?" There are lots of reasonable answers to that question, and I tend to favor "Every Day" as the best answer. But there is a catch! If you don't want to go insane, <strong>you absolutely must ignore what the scale says each day</strong> and only look at the trend over time. What does that mean?</p>
<h2 id="heading-normal-daily-weight-fluctuations"><strong>Normal Daily Weight Fluctuations</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716327384874/3b00b28f-3521-4da7-9fd2-ae29d315937e.png" alt="Diagram titled &quot;Typical Human Mass Throughput Pounds/Day&quot; showing a human figure with arrows indicating mass intake and output. Inputs: Food 2.5, Water 9.2, O2 1.8. Outputs: Solids 0.3, Water 11, CO2 2.2. Total intake and output are both 13.5 pounds per day." /></p>
<p>John Walker talks about this in much more detail in his book, <a target="_blank" href="http://www.fourmilab.ch/hackdiet/">The Hacker's Diet</a>, but I'll summarize the key point here. A lot goes into your body every day and a lot comes out. The food you eat, the liquids you drink, the air you breathe, etc. all come in. And in general, an equal amount of those things come back out as well. For a typical person, as the diagram above shows, these add up to 13.5 pounds of "stuff" coming in and out of your body each day.</p>
<p>But keep in mind that the balance of 13.5 pounds in and 13.5 pounds out is only an average over time. On any given day, at any given time of the day, your intake may be more or less than your output, often by as much as 1-2 pounds. Keep this in mind: <em>Even if you are not gaining or losing weight (i.e. fat or muscle), if you weigh yourself every day, the scale will still show your weight going up and down each day, often by as much as 1-2 pounds.</em> That's just the way it is. What matters is that, in that example, the fluctuations will be consistently around a particular weight.</p>
<h2 id="heading-mental-stress"><strong>Mental Stress</strong></h2>
<p>Losing weight is hard. Really hard. And a good portion of it is mental. It takes real mental effort to really change your lifestyle or behavior in order to lose weight. If you are trying to lose weight and you step on the scale tomorrow morning and it says you gained half a pound, that may psychologically crush you and make you question your resolve, not to mention how it would feel to see that happen 5 out of every 10 days. But as I just explained, if you weigh yourself every day, the scale is going to go up on some days.</p>
<p>The solution is to focus on the weight trend and not any individual day's weight. By focusing on the weight trend, the random day-to-day fluctuations will fade into the background and you can focus on the fact that you're making consistent progress towards your goal. Figuring out your weight trend is pretty easy to do, but I'll come back to that in a minute.</p>
<p>Wait, if weighing yourself every day has all these issues, why do it? Why not just weigh yourself less often? That is an alternative option, but remember that random day-to-day changes may often be as much as 1-2 pounds. If you want to make sure that real weight loss isn't hidden by that noise, then you may have to weigh yourself really infrequently.</p>
<p>If you are trying to lose 1 pound per week, you'd have to weigh yourself only once every 2-3 weeks in order to make sure that your real weight loss always shows through the "noise" of random daily fluctuation. That's a long time to go without any positive feedback. By weighing yourself every day and focusing on the trend instead of the number on the scale, I get daily reinforcement that I am slowly but surely making progress. That helps. A lot.</p>
<h2 id="heading-focusing-on-the-trend"><strong>Focusing on the Trend</strong></h2>
<p>Let me show you in pictures what this really means. I have been weighing myself nearly every day for the past year. First let's look at what my day-to-day weight changes have been over the past year. In this picture, the green arrows are days where the scale showed a lower number than the day before (yay! I lost weight), and the red arrows are days where the scale showed a higher number than the day before (oops! I gained weight):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716327652868/4bc061b6-70e9-44c8-8121-9a73a550e350.png" alt /></p>
<p>So how did I do? Over the long term, if you looked at the actual numbers, it would be obvious that I lost weight, but in the trenches, day-to-day, if I was focusing on the number on the scale, I would have felt like I was on a crazy roller-coaster and likely I would have been highly frustrated:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716327667631/ae622fa5-c923-4453-a450-5a093f0606ac.png" alt /></p>
<p>Ok, so that sucks. But what kind of difference does it make if I look at the weight trend each day instead of the number on the scale? Let's say each day I figure out what my average weight was over the past 10 days (let's call that my "trend weight"). Because this trend weight is an average of many days, the random day-to-day fluctuations will mostly cancel each other out, and the trend weight will slowly go down each day if my weight is decreasing over time. You may commonly hear this idea of a trend weight referred to as a "moving average".</p>
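<p>The "average of the past 10 days" idea is simple enough to sketch in a few lines of code (a plain moving average; The Hacker's Diet actually uses an exponentially smoothed variant, but the effect is similar):</p>

```typescript
// Compute a "trend weight" for each day: the average of the most
// recent `window` daily scale readings (10 in the example above).
// Early days average whatever readings exist so far.
function trendWeights(dailyWeights: number[], window = 10): number[] {
  return dailyWeights.map((_, i) => {
    const recent = dailyWeights.slice(Math.max(0, i - window + 1), i + 1);
    return recent.reduce((sum, w) => sum + w, 0) / recent.length;
  });
}
```

<p>Feed in a noisy series of daily weights and you get back a much smoother series, which is exactly the effect you can see in the next chart.</p>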
<p>But let's not worry about the math right now, and let's just see what this does to my day to day mental stress. If I take my daily weight data for the last year and look at trend weights instead of what the scale said on any given day, here is the result:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716327680571/32ad1c88-c869-4e6d-b561-c95a711da12d.png" alt /></p>
<p>What a difference! For the first two thirds of that time range, I was pretty happy almost every day because almost every day, the trend weight got a little smaller day after day. I can tell you that this reinforcement was a huge help, especially at the beginning when I was not sure I had it in me to lose the weight I needed to lose.</p>
<p>In the last third of this time range, I had some hiccups. I really <em>was</em> gaining weight, so the trend weight wasn't lying. If you are curious, the first long stretch of weight gain was the result of a change to my blood pressure medications that affected how much water I retained and as a result, I gained 10 pounds over the course of two weeks. And the second long stretch was a family vacation where I fell off the wagon, so to speak. The good news is that I'm back on track and have re-lost all the pounds I gained over the summer.</p>
<p>Ignore the specifics of my situation for a moment and look at the two pictures again. I hope you can see what a difference looking at the weight trend makes. I think the daily, positive reinforcement was key to my really getting over the initial mental hurdle at the start.</p>
<h2 id="heading-how"><strong>How?</strong></h2>
<p>Ok, so I think you get my point that I am a huge fan of daily weight measurement combined with the use of a moving average to look at the weight trend instead of any individual scale reading. Doing this is not as hard as you might imagine, and it doesn't require you to be a math whiz either.</p>
<p>In my opinion, the best technique for monitoring your weight trend over time is the one that John Walker describes in the "Signal and Noise" chapter of <em>The Hacker's Diet</em>. There are lots of free tools available that handle all the math John talks about so that you don't have to worry about it.</p>
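<p>For the curious, the core of that technique is an exponentially smoothed moving average: each day, the trend moves a fraction of the way (10% in the book) toward today's scale reading. Here is a minimal sketch of that update rule, using made-up sample readings:</p>

```python
# Sketch of an exponentially smoothed moving average: each day the trend
# moves 10% of the way toward today's scale reading.
# The readings below are made-up sample numbers.
def update_trend(trend, todays_weight, smoothing=0.10):
    return trend + smoothing * (todays_weight - trend)

trend = 185.0  # seed the trend with the first reading
for reading in [184.2, 186.1, 183.8, 184.9, 183.5]:
    trend = update_trend(trend, reading)
print(round(trend, 1))
```

<p>Unlike a plain 10-day average, this version never forgets old readings entirely; it just weights recent ones more heavily, which is why it produces such a smooth line.</p>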
<p>First is the web application I built, initially just for myself, to do this kind of tracking for people who have <a target="_blank" href="http://amzn.com/B0077L8YFI?tag=trendweight-20">Fitbit</a> or <a target="_blank" href="http://amzn.com/B002JE2PSA?tag=trendweight-20">Withings</a> WiFi scales. If you have one of those scales, <a target="_blank" href="https://trendweight.com/">TrendWeight</a> is free and will automatically pull your weight readings each day and show you your weight trend. Note that you don't actually need a WiFi scale to use TrendWeight. It works just as well if you have a Fitbit.com account and use their iPhone app to manually enter your weight each day.</p>
<p>There are also lots of other options if you don't have a fancy electronic scale. There are both <a target="_blank" href="http://itunes.apple.com/app/true-weight/id287941226?mt=8">iPhone apps</a> and <a target="_blank" href="https://play.google.com/store/apps/details?id=net.cachapa.libra&amp;hl=en">Android apps</a> that have this approach baked in. You manually enter your weight each day, and they do all the math for you.</p>
<p>If you don't want to use your smartphone, there are websites that will let you manually enter your weight each day and will also do all the math. John Walker created the <a target="_blank" href="http://www.fourmilab.ch/hackdiet/online/hdo.html">Hacker's Diet Online</a> to do this, and <a target="_blank" href="http://physicsdiet.com/">Physics Diet</a> is another popular choice.</p>
<p>And last but not least, you don't even need a computer to do this. John Walker also explains how you can do it yourself with a sheet of paper and a pencil kept next to your bathroom scale; all you need is basic addition and subtraction (no calculator needed). You can read about it in the <a target="_blank" href="http://www.fourmilab.ch/hackdiet/e4/">Paper and Pencil</a> chapter of his book.</p>
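<p>As I understand the pencil-and-paper version, one day's update looks like this: subtract yesterday's trend from today's weight, shift the decimal point one place left (that's the 10%), round to one decimal, and add the result back to yesterday's trend. Here is that single step spelled out with made-up numbers:</p>

```python
# One day of the pencil-and-paper trend update (made-up sample numbers):
# difference -> shift decimal one place left -> round -> add back to the trend.
yesterday_trend = 184.9
todays_weight = 183.5
difference = todays_weight - yesterday_trend   # about -1.4
adjustment = round(difference / 10, 1)         # -0.1 after shifting the decimal
todays_trend = yesterday_trend + adjustment
print(round(todays_trend, 1))
```

<p>The rounding is what makes this practical with a pencil: you only ever carry one decimal place from day to day.</p>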
<p>If you are trying to lose weight, first, good for you! Second, I hope I have convinced you to at least try weighing yourself daily and calculating your trend weight. I honestly believe this approach will help you see your daily successes without getting distracted by random day-to-day fluctuations.</p>
]]></content:encoded></item></channel></rss>