<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Blog | Drew Skwiers-Koballa]]></title><description><![CDATA[Blog | Drew Skwiers-Koballa]]></description><link>https://blog.drewsk.tech</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1767111371193/60904391-09c8-4a9f-9675-4226f82cad23.png</url><title>Blog | Drew Skwiers-Koballa</title><link>https://blog.drewsk.tech</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 08:11:34 GMT</lastBuildDate><atom:link href="https://blog.drewsk.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[2026 Reset Plan]]></title><description><![CDATA[2025 wasn't a great year for me at work. If you use products that I'm involved in, well, you're probably not too surprised to hear me say that. What might be surprising is that when I was looking in the retrospective mirror it was less about the shor...]]></description><link>https://blog.drewsk.tech/2026-reset-plan</link><guid isPermaLink="true">https://blog.drewsk.tech/2026-reset-plan</guid><category><![CDATA[reset-challenge]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[burnout]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Fri, 23 Jan 2026 22:25:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767646591879/de5f9d69-6382-4850-b702-afd55a3caefe.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>2025 wasn't a great year for me at work. If you use products that I'm involved in, well, you're probably not too surprised to hear me say that. 
What might be surprising is that when I was looking in the retrospective mirror it was less about the short-term (1-year) outcomes and more about how I felt along the way. Every day of 2025 felt like a chaotic mad dash to try to salvage, stretch, and find compromises.</p>
<p>It wasn't sustainable, and I had a few near-breaking points. I don't want that for this year (on multiple levels), so I've been intentional about the first steps of 2026, bringing a new tack and perspective to the way I internally operate.</p>
<h2 id="heading-gratitude">Gratitude</h2>
<p>Before I dive in, I wanted to extend my gratitude to Kelly Vaughn with the <a target="_blank" href="https://www.modernleader.is/">Modern Leader</a> for creating and sharing the reset challenge. Provided as a series of bite-sized prompts over the first 3 weeks of the year, it helped me examine the patterns I was exhibiting and the ingrained habits that perpetuate this situation. There's a lot of leadership-branded stuff in the market that's laden with bullshit or heavy on a singular viewpoint, but the reset challenge was approachable and kept space for individuality.</p>
<hr />
<h2 id="heading-1-where-i-was">1. Where I was</h2>
<p>I was operating as a high-capacity, high-responsiveness PM, constantly taking on more work than planned in the name of being useful, adaptable, and dependable—especially during organizational instability.</p>
<p>In practice, this meant:</p>
<ul>
<li>Letting “urgent” inputs bypass my planned work</li>
<li>Regularly pushing aside my own priorities to do things that felt fast or helpful</li>
<li>Carrying work “temporarily” that never actually moved off my plate</li>
</ul>
<p>On paper, things kept moving. Internally, my focus, momentum, and mental health steadily eroded.</p>
<h2 id="heading-2-the-pattern-i-identified">2. The pattern I identified</h2>
<p>Over-pressuring myself to do more and different things—by default, not by choice.</p>
<p>This pattern shows up as:</p>
<ul>
<li>New emails, messages, and asks jumping the queue over my todo list</li>
<li>Convincing myself something will “only take an hour” and doing it immediately</li>
<li>Defaulting to me as the solution instead of finding the right owner</li>
</ul>
<p>The result wasn’t flexibility—it was constant self-interruption.</p>
<h2 id="heading-3-why-it-matters">3. Why it matters</h2>
<p>When this pattern runs unchecked, it quietly costs me:</p>
<ul>
<li>Mental health and emotional stability</li>
<li>Momentum on meaningful, multi-week work</li>
<li>A sense of accomplishment at the end of the day</li>
</ul>
<p>Most dangerously, it trains my brain to prioritize urgency over importance, which is the opposite of what good PM leadership requires—especially at senior levels.</p>
<h2 id="heading-4-the-experiment-im-running">4. The experiment I’m running</h2>
<p>This is a time-bound learning experiment, not a permanent rule.</p>
<p>Experiment:</p>
<ul>
<li>For 2–3 working days, I will not accept “day-of” work</li>
<li>Every incoming ask gets written down, not acted on</li>
<li>One week later, I will review:<ul>
<li>What truly mattered</li>
<li>What no one noticed</li>
<li>What only felt urgent in the moment</li>
</ul>
</li>
</ul>
<p>Supporting constraints:</p>
<ul>
<li>Everything I work on must go through my todo list—even exceptions</li>
<li>If I choose to diverge, it’s explicit and visible</li>
</ul>
<p>This adds just enough friction to interrupt autopilot without breaking my job.</p>
<h2 id="heading-5-what-success-looks-like">5. What success looks like</h2>
<p>Success is not “perfect focus” or “zero distractions.”</p>
<p>Early signals this is working:</p>
<ul>
<li>I end more days feeling done, not depleted</li>
<li>I feel prepared for the things that matter—or calm about what doesn’t</li>
<li>Weekly reviews of my 1–3 month goals trigger less anxiety and more clarity</li>
<li>Decisions feel lighter because fewer things are competing for attention</li>
</ul>
<p>These are signals, not metrics. If they show up, I’m on the right path.</p>
<h2 id="heading-6-what-im-not-trying-to-change-right-now">6. What I’m not trying to change right now</h2>
<p>This reset is intentionally narrow.</p>
<p>I am not trying to:</p>
<ul>
<li>Become less helpful or less responsive overall</li>
<li>Fix organizational instability or external chaos</li>
<li>Eliminate distractions entirely</li>
<li>Prove anything about my worth or capability</li>
<li>Operate at 100% focus every day</li>
</ul>
<p>I am trying to: Turn the volume down on a harmful default—by about 20–30%—and reclaim enough cognitive space to do the work that actually matters.</p>
<hr />
<p>🥂 Cheers to 2026.</p>
]]></content:encoded></item><item><title><![CDATA[Keeping core documents on-hand]]></title><description><![CDATA[One of the challenges of being able to communicate consistently and constantly is having the right documents on hand. When spread across multiple projects or teams, you’ll find these documents may be in multiple locations - but if they have a local s...]]></description><link>https://blog.drewsk.tech/keeping-core-documents-on-hand</link><guid isPermaLink="true">https://blog.drewsk.tech/keeping-core-documents-on-hand</guid><category><![CDATA[macOS]]></category><category><![CDATA[office365]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Fri, 16 Jan 2026 00:30:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767804374348/b58a6835-82ef-4ddd-9955-909b964d7f59.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the challenges of being able to communicate consistently and constantly is having the right documents on hand. When spread across multiple projects or teams, you’ll find these documents may be in multiple locations - but if they have a local sync capability (like OneDrive + Sharepoint) you can create your own personal briefcase of priority files to have on hand.</p>
<p><em>Yes, I know OneDrive apps offer a pinned/favorite option for files - but they’re separated by app. I usually have Word docs, PowerPoint files, etc all mixed together and this has been the best solution for that challenge.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767803877431/1e06a72e-bf47-48cb-92c1-da7cf0de6a70.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-overall-setup">Overall setup</h2>
<p>You designate a local folder, like <strong>~/Documents/CoreDocuments</strong>, to be your go-to directory for the files you need frequently. I like to keep this folder pinned to the dock in folder display mode for 1-second access, but you can also make it easy to get to by dragging it into the <em>Favorites</em> in Finder.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767815498730/5e988289-8744-4b75-bd95-1574ef63f79e.png" alt class="image--center mx-auto" /></p>
<p>From here, we’ll be placing <strong>Aliases</strong> in the folder to the files we use frequently. You can manually create these for each file by selecting “Make Alias” from the context menu and then dragging the newly created alias into your CoreDocuments directory (or whatever you’re calling it). This is still a win in terms of having better access to files, but since the destination will always be the same, it seems silly to always have a 2nd step.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767815753081/f8bddc40-89c4-4dcc-b70e-ab4732bf5b70.png" alt class="image--center mx-auto" /></p>
<p>Enter Shortcuts, Applescript, and the Quick Actions menu.</p>
<h2 id="heading-leveraging-shortcuts">Leveraging Shortcuts</h2>
<p>Open the <strong>Shortcuts</strong> app on your computer - a powerhouse of a utility for creating automations. This is the 2-step shortcut we’re going to create:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767831576995/d67c4521-ac65-4785-8a80-8e59d598c0f7.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Start a new shortcut and, using the info icon on the right, select “Use as Quick Action” and “Finder”.</p>
</li>
<li><p>In the 1st step of the shortcut, change the “Receive” option to select only Files</p>
</li>
<li><p>Using the cascading squares icon on the right, find “Run AppleScript” and double-click to bring it onto the canvas.</p>
</li>
<li><p>Click on the “Run AppleScript” step where “Shortcut Input” is linked and change the type dropdown to File.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767803894320/b73d03d8-0129-4c2f-af06-3fdb72e20e06.png" alt class="image--center mx-auto" /></p>
<ol start="5">
<li><p>In the body of the AppleScript step, paste in the AppleScript snippet provided below. Modify the 2nd line to match the path to the folder you created to store your aliases in.</p>
</li>
<li><p>Click on the header of the window to rename the shortcut and optionally change the icon up.</p>
</li>
</ol>
<p>That’s it. Close the window, and get ready to test it (details below the AppleScript).</p>
<h3 id="heading-the-applescript">the AppleScript</h3>
<pre><code class="lang-applescript">on run {input, parameters}
    <span class="hljs-built_in">set</span> destPath to <span class="hljs-string">"/Users/drewsk/Documents/CoreDocuments"</span>
    <span class="hljs-built_in">set</span> destFolderAlias to POSIX file destPath

    tell application <span class="hljs-string">"Finder"</span>
        <span class="hljs-built_in">set</span> destFolder to folder destFolderAlias

        repeat with anItem <span class="hljs-keyword">in</span> input
            make new <span class="hljs-built_in">alias</span> file at destFolder to anItem
        end repeat
    end tell

    <span class="hljs-built_in">return</span> input
end run
</code></pre>
<h2 id="heading-the-shortcut-in-action">The shortcut in action</h2>
<p>Using the context menu on any file (or selecting multiple), under the “Quick Actions” branch your shortcut will now appear. Selecting it makes an alias immediately in your designated folder, making it much easier to get back to this file multiple times in the future.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767815703991/13706488-2bd5-4a91-912d-5369db96d10d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-helpful-links">Helpful links</h2>
<p>It took just a bit for me to get this working how I wanted, and this article was helpful in working with the raw AppleScript before moving into Shortcuts:</p>
<p><a target="_blank" href="https://www.codegenes.net/blog/how-do-i-create-a-macintosh-finder-alias-from-the-command-line/">https://www.codegenes.net/blog/how-do-i-create-a-macintosh-finder-alias-from-the-command-line/</a></p>
]]></content:encoded></item><item><title><![CDATA[Love for the Stream Deck: Microsoft Teams]]></title><description><![CDATA[I was on the fence about getting a Stream Deck for a while, and when I finally jumped on it I immediately found it worth it for a single application interface - Microsoft Teams... No more searching for the right button at the opportune moment (mute? ...]]></description><link>https://blog.drewsk.tech/love-for-the-stream-deck-microsoft-teams</link><guid isPermaLink="true">https://blog.drewsk.tech/love-for-the-stream-deck-microsoft-teams</guid><category><![CDATA[stream deck]]></category><category><![CDATA[microsoft-teams]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Thu, 10 Jul 2025 01:49:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752026943035/c6233d2c-a68d-457b-98a6-e367bf0fe288.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was on the fence about getting a Stream Deck for a while, and when I finally jumped on it I immediately found it worth it for a single application interface - Microsoft Teams... No more searching for the right button at the opportune moment (mute? hang up? thumbs up! ugh I just raised my hand) - the setup was quick and simple.</p>
<hr />
<h2 id="heading-what-is-a-stream-deck">What is a Stream Deck?</h2>
<p>A pricey piece of kit for your desk that adds a bunch of extra buttons <em>off</em> of your keyboard with explicit uses - you decide what they do. The smallest (Neo) Stream Deck still runs about $100 but you can also find Stream Decks of multiple generations pre-loved from places like eBay.</p>
<p><a target="_blank" href="https://www.elgato.com/stream-deck">https://www.elgato.com/stream-deck</a></p>
<p><em>The Stream Deck is clutch for anything you do on your computer that takes multiple clicks to get done or requires precise motions.</em></p>
<h2 id="heading-microsoft-teams-integration">Microsoft Teams integration</h2>
<p>I use Microsoft Teams a lot (too much!) at work (PM at Microsoft, disclosure reminder) - over half my day gets consumed by meetings if I’m not careful. The Stream Deck plugin for Microsoft Teams (<a target="_blank" href="https://marketplace.elgato.com/product/microsoft-teams-da5e2bbc-197c-4afe-8a85-a9941bf52697">https://marketplace.elgato.com/product/microsoft-teams-da5e2bbc-197c-4afe-8a85-a9941bf52697</a>) was the first plugin I installed when setting up the deck and it was way easier than I expected.</p>
<p>The key step in my environment was to enable integrations (APIs) under the <strong>Privacy</strong> tab in the Teams settings (cmd + comma, thankfully). As long as you’ve toggled on the “enable API” option under “Third-party app API” for Teams, then your Stream Deck can quickly control your in-call options.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752111496562/4cf98749-be4a-4a15-a568-43202042c9dc.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-is-it-worth-it">Is it worth it?</h2>
<p>To be honest, for the volume of calls I’m on, the Stream Deck has been worth it for just the Teams integration alone. That doesn’t say great things about the Microsoft Teams interface (gosh did I just raise my hand when trying to thumbs up again?) but to be fair there are some paradigm complications when you need to layer the call window in with all the other interfaces on the machine. I really don’t like the small pop-up window that appears when the call window isn’t visible for a moment and prefer the tactility of the external buttons.</p>
]]></content:encoded></item><item><title><![CDATA[Go to the Terminal]]></title><description><![CDATA[I want to go to there
There’s 2 frequent entry points I use to go into the terminal - either from general file explorer (Finder) or from the file explorer in VS Code. In VS Code, there’s a context menu item that makes it easier and in Finder I have...]]></description><link>https://blog.drewsk.tech/go-to-the-terminal</link><guid isPermaLink="true">https://blog.drewsk.tech/go-to-the-terminal</guid><category><![CDATA[macOS]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Sat, 05 Jul 2025 16:30:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751732833908/7309b9db-9ee6-4a00-9573-f36b9de3ead2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-i-want-to-go-to-there"><em>I want to go to there</em></h2>
<p>There’s 2 frequent entry points I use to go into the terminal - either from general file explorer (Finder) or from the file explorer in VS Code. In VS Code, there’s a context menu item that makes it easier, and in Finder I have a quick customization that saves me a bunch of time. If you ever use the terminal you may want to know about these 2 options:</p>
<h1 id="heading-from-vs-code">From VS Code</h1>
<p>I appreciate how the terminal is integrated with VS Code, from the ability to quickly get it open with a common keystroke (ctrl+`) to the extensibility points. It’s just pretty rare that I’m not working in VS Code and using the terminal for some very repeatable interaction (git, build, etc). Even more esoteric things, like duplicating a file in a different folder or managing multiple projects in a single VS Code workspace, lead me to the terminal. This is probably why I constantly point out the “Open in Integrated Terminal” context menu item in VS Code.</p>
<p>Whether it’s a nested folder or just a long folder name you can’t remember the first couple letters of, right-clicking on a folder in VS Code’s file explorer gives you an “Open in Integrated Terminal” option towards the top. Selecting this gives you a <em>new</em> terminal that starts from that location.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751057142595/f65c5bbd-9a50-4918-8fc1-8978cd7694de.png" alt class="image--center mx-auto" /></p>
<p>The VS Code menu item may not save as much time per use as the next option, but since I use it so much I wanted to mention it.</p>
<h1 id="heading-from-finder">From Finder</h1>
<p>There’s 2 things I usually want to do from a folder nested somewhere on my machine - that’s open it in a terminal or open it in an IDE (usually VS Code). <a target="_blank" href="https://github.com/Ji4n1ng/OpenInTerminal">Open in Terminal</a> is an app for macOS that extends the Finder toolbar to simply do 1 or more actions. It’s not ideal that I have to install an app to do something somewhat fundamental for system navigation, but the general simplicity of the Finder interface is preferred over having an extra 10 options I don’t use just to get these 2 out of the box.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751731058388/ba27f35e-d590-4e03-b77d-1826bac0dea6.png" alt class="image--center mx-auto" /></p>
<p>Because I lean towards the streamlined interface, I use OpenInTerminal-Lite with shortcuts for opening in the terminal and in VS Code placed in the toolbar. The apps that it opens, like which terminal (Terminal, iTerm2) or which editor (VS Code, IntelliJ, Cursor), are customizable by you with a quick command or during first launch setup. Check it out - all of it, since it’s open source - at <a target="_blank" href="https://github.com/Ji4n1ng/OpenInTerminal">https://github.com/Ji4n1ng/OpenInTerminal</a>.</p>
<p>You can set it up to open just about whichever apps you prefer. As long as the app supports the <code>open</code> command with the file/folder following, you’re likely to have success going beyond the impressive initial list of apps supported.</p>
<h1 id="heading-not-using-the-terminal-much">Not using the terminal much?</h1>
<p>Even if you’re not at home in the terminal, the OpenInTerminal utility for Finder will help you out in opening the editor. The terminal isn’t the end-all-be-all of using your computer, but if you do development or IT administration, chances are that it can make you more effective over time. That said - the terminal is something you pretty much only learn with practice and then continued discovery. Better get in there.</p>
]]></content:encoded></item><item><title><![CDATA[Browser-based scam that originates in Google ads]]></title><description><![CDATA[Source: https://www.malwarebytes.com/blog/news/2025/06/scammers-hijack-websites-of-bank-of-america-netflix-microsoft-and-more-to-insert-fake-phone-number
Google “search results” ads purchased by malicious actors point to the legitimate websites of bi...]]></description><link>https://blog.drewsk.tech/browser-based-scam-that-originates-in-google-ads</link><guid isPermaLink="true">https://blog.drewsk.tech/browser-based-scam-that-originates-in-google-ads</guid><category><![CDATA[news-commentary]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Thu, 19 Jun 2025 14:38:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750344243717/666a1dee-1d25-4ff2-9bfd-7c978b98f8c6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Source: <a target="_blank" href="https://www.malwarebytes.com/blog/news/2025/06/scammers-hijack-websites-of-bank-of-america-netflix-microsoft-and-more-to-insert-fake-phone-number">https://www.malwarebytes.com/blog/news/2025/06/scammers-hijack-websites-of-bank-of-america-netflix-microsoft-and-more-to-insert-fake-phone-number</a></p>
<p>Google “search results” ads purchased by malicious actors point to the legitimate websites of big name companies, but use those websites’ content handling techniques (query strings) to add additional content to the page. You land on the real website, but the search bar is filled with “CALL NOW FOR HELP 1-800-123-4567”.</p>
<p>The summary is a bit sensationalized, since “hijack” makes it sound like the malicious actors have complete control of the car. This is more akin to slapping a bumper sticker on it with their information before walking away. Dangerous and bad, still yes.</p>
<p>Technically, the buck stops with the company’s website, which has to protect itself from being used in harmful ways against its users. Savvy users will likely pick up on these being scams, but even smart people have urgent moments where they’re drawn in by the very scam techniques they’ve learned to notice. Do you lock your search results pages down to internal referrers? Do you block google.com search results from your search results pages?</p>
<p>Query strings are <em>really convenient</em> ways to pass small amounts of information between pages. The question mark + q + equals sign is synonymous with looking for something on a website. Will this campaign through Google ads serve as a driver to move web technology off of query strings? I’ll be honest, adding cookies in GET requests or switching to multi-request sequences are both non-optimal solutions. I’m glad someone smarter than me can sort it out.</p>
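<p><em>A minimal sketch of the mechanism (the handler name and URLs below are hypothetical, not from the report): a results page reflects the <code>?q=</code> value into its UI, and even correct HTML-escaping doesn’t stop this scam because the payload is plain text.</em></p>

```python
from html import escape
from urllib.parse import urlparse, parse_qs

def reflected_query(url: str) -> str:
    """Hypothetical server-side handler: pull ?q= from the request URL,
    escape it, and return the text interpolated into the results page."""
    q = parse_qs(urlparse(url).query).get("q", [""])[0]
    # escape() neutralizes markup injection (XSS), but the scam payload
    # is plain text -- a phone number -- so it sails through unchanged.
    return escape(q)

print(reflected_query("https://example.com/search?q=CALL+NOW+FOR+HELP+1-800-123-4567"))
# prints: CALL NOW FOR HELP 1-800-123-4567
```

<p><em>That’s why the plausible defenses sit upstream of output encoding: vetting referrers, flagging query values that look like phone numbers or urgent instructions, or not reflecting the query into prominent UI at all.</em></p>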
<p>Speaking of Google ads… they’re complicit here too. Can Google avoid serving these ads? Sure, with a similar amount of technical difficulty as the websites detecting fraudulent use but they’re not using industry standard tech. Google is cultivating the breeding ground for implicit trust in their quest for monetization of the search results. The betrayal in the search results is that the ads continue to become disguised as organic search results. How far down the page do I need to scroll before I can start trusting the links and page summaries?</p>
<p>Malwarebytes takes the time to mention their browser extension that can detect scams on websites, but frankly - they could offer detection services (for a cost) to Google before trust in the ads in Google’s search results erodes further.</p>
<p>More on this from Ars: <a target="_blank" href="https://arstechnica.com/security/2025/06/tech-support-scammers-inject-malicious-phone-numbers-into-big-name-websites/">https://arstechnica.com/security/2025/06/tech-support-scammers-inject-malicious-phone-numbers-into-big-name-websites/</a></p>
]]></content:encoded></item><item><title><![CDATA[What are your legacy norms?]]></title><description><![CDATA[Revisiting prior decisions can be tough and effectively doing so often requires you to notice the page margins. Not literal page margins, but like page margins, functionality that exists because the original constraints required it. Page margins exis...]]></description><link>https://blog.drewsk.tech/what-are-your-legacy-norms</link><guid isPermaLink="true">https://blog.drewsk.tech/what-are-your-legacy-norms</guid><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Mon, 16 Jun 2025 21:17:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749520023626/e85c401c-0269-4bee-bfa5-36140d1856d8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Revisiting prior decisions can be tough and effectively doing so often requires you to notice the page margins. Not literal page margins, but like page margins, functionality that exists because the original constraints required it. Page margins exist in word processors because printers have limited ability to reach the edge of the page. Now we’re rarely taking our digital content out of Word/Pages and into a piece of paper, but we continue to open files in that pre-determined shape and edge settings.</p>
<p>I digress. Noticing those page margins, the legacy norms, in your established product is especially important when you’re looking to bring new life into it with an evolution or revolution. I might be the only person that will complain about page margins, but I’ll bet if we looked more closely at how we could present content in word processors without the default margins we might come up with a whole new way to interact with digital documents.</p>
<p>How do you pick up on your legacy norms? I start by naming the assumptions. This can be difficult to do thoroughly sometimes because there just aren’t APIs for all the things that are assumed - why on earth would I build in a property to set something nobody thinks to question? Think about all those videos of parents having their kids tell them how to make a peanut butter sandwich, where the parents smear peanut butter on their hands and smash the bread because the kid’s directions have assumptions baked into them. We are all those kids, and we have to look at our products like the parents to notice the legacy norms in the form of assumptions.</p>
<p>Watching new users experience your product can be a way to unearth the legacy norms by observing their points of friction, but there’s often poison in the well. If your product was originally designed to be consistent with other pieces of an ecosystem, that benefit also reinforces the legacy norms. To some people, a floppy disk is a 1.44MB portable storage device; to others, it’s the save icon and nothing more.</p>
<p>Why do we even care about the legacy norms? Well, like I mentioned with how new users experience your product, the legacy norms might be sources of friction on a scale from speed bump to completely blocked. Even for a functionally perfect product, the legacy norms could be opportunities for enhancement. Legacy isn’t meant to have a negative connotation here; standing the test of time shouldn’t be taken for granted. However, there’s a fine line between surviving and continued success.</p>
<h2 id="heading-post-edit-pageless-mode-in-google-docs">Post-edit: pageless mode in Google Docs</h2>
<p>I’d be remiss to not come back when I got a good chuckle from a dialog introducing me to <a target="_blank" href="https://support.google.com/docs/answer/11528737">pageless mode</a> in Google Docs. No matter how normalized a standard becomes, it is still up for evolution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750615742744/2f38c613-8d98-423b-b5e8-47e31012f306.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Should you install the Apple developer betas?]]></title><description><![CDATA[The Apple developer betas are exciting pre-pre-releases accompanying the spring-to-summer WWDC event (Apple’s World-wide developer conference). For as many years as I can remember (5? 10?) the full OS lineup has had the upcoming releases previewed, e...]]></description><link>https://blog.drewsk.tech/should-you-install-the-apple-developer-betas</link><guid isPermaLink="true">https://blog.drewsk.tech/should-you-install-the-apple-developer-betas</guid><category><![CDATA[macOS]]></category><category><![CDATA[wwdc]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Fri, 13 Jun 2025 01:47:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749782174285/5340f83f-5695-4d91-8086-a43cf7663a62.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Apple developer betas are exciting pre-pre-releases accompanying the spring-to-summer WWDC event (Apple’s World-wide developer conference). For as many years as I can remember (5? 10?) the full OS lineup has had the upcoming releases previewed, each with their own demo reel and headline-level changes. The trouble with the excitement is that the hype makes people forget that the original target audience of WWDC is developers that build on/for the Apple ecosystem and the previews released at this point aren’t the public betas.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749767093656/318ce249-054b-48e1-b0e3-6fbbe04a9010.png" alt class="image--center mx-auto" /></p>
<p>Public betas are released by Apple later in the summer and still keep you ahead of the (super uncool) general population. For anything that needs a plausible level of stability, the public betas are really where you want to start experimenting. Anything sooner is asking for trouble. You might be ok. Or you might have to completely reimage your system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749767571664/e8007579-631d-4474-a3b5-021bc7a0b2c5.png" alt class="image--center mx-auto" /></p>
<p>In case you enjoy flowcharts, I tried to break down whether you should be installing macOS developer betas - <em>especially the first couple</em> releases:</p>
<pre><code class="lang-mermaid">flowchart TD
        A(["You're on the current OS"])
        A --&gt; B{"Are you a developer?"}
        B --&gt; C["No"]
        B --&gt; D["Yes, I build things"]
        B --&gt; E["Yes, I build things for that platform"]
        C --&gt; F(["Stay on the current OS"])
        D --&gt; F
        E --&gt; G(["Test your app on the developer beta"])
        F --&gt; H["I really don't want to"]
        H --&gt; I(["At least take a backup first"])
</code></pre>
<h2 id="heading-if-you-got-into-trouble">If you got into trouble</h2>
<p>If you tried out the developer beta for macOS and it has not gone well, there’s a few helpful links:</p>
<ul>
<li><p>Here’s how to reinstall macOS - <a target="_blank" href="https://support.apple.com/en-us/102655">https://support.apple.com/en-us/102655</a></p>
</li>
<li><p>Hopefully you took a backup - <a target="_blank" href="https://support.apple.com/en-us/102307">https://support.apple.com/en-us/102307</a></p>
</li>
<li><p>And are ready to reimage from macOS recovery - <a target="_blank" href="https://support.apple.com/en-us/102518">https://support.apple.com/en-us/102518</a></p>
</li>
</ul>
<p>Similar processes are available for iPadOS and iOS, although they can be more tricky in the age of MFA tokens.</p>
<h2 id="heading-snark-and-cynicism-aside">Snark and cynicism aside</h2>
<p>Beta software is fun, and that excitement is why half of us got into building things with computers in the first place. I don’t use my iPad all the time, so I’m going to grab the developer beta there, but I have to remember that I’ve ended up reverting about every other year until the public beta.</p>
<p>This weekend I’m going to back up my personal laptop and give the macOS 26 developer beta a spin because I can’t wait to give the <a target="_blank" href="https://github.com/apple/containerization">containerization package</a> a try.</p>
]]></content:encoded></item><item><title><![CDATA[Build and deploy a SQL project for multiple SQL versions]]></title><description><![CDATA[Sometimes you need to deploy the same database project to both a SQL Server instance and an Azure SQL Database. Your test infrastructure could be SQL Server in containers while your application runs in Azure SQL Database, or your application is could...]]></description><link>https://blog.drewsk.tech/build-and-deploy-a-sql-project-for-multiple-sql-versions</link><guid isPermaLink="true">https://blog.drewsk.tech/build-and-deploy-a-sql-project-for-multiple-sql-versions</guid><category><![CDATA[dacpac]]></category><category><![CDATA[SQL Projects]]></category><category><![CDATA[azure-devops]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Thu, 01 May 2025 07:00:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745893326798/b3a395a8-a814-4d57-86c3-78dd5e71d943.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes you need to deploy the same database project to both a SQL Server instance and an Azure SQL Database. Your test infrastructure could be SQL Server in containers while your application runs in Azure SQL Database, or your application is could simply be used on different infrastructure per tenant. With the consistency of T-SQL syntax across the SQL engines and smaller differences between versions, this isn’t an uncommon scenario but you do want to be thoughtful and careful about how you approach it. In this article I’ll look briefly at how you can deploy a SQL project for multiple versions of SQL, like SQL Server 2019 and Azure SQL Database, from Azure DevOps pipelines.</p>
<h2 id="heading-deploy-the-same-dacpac-to-multiple-platforms-with-pallowincompatibleplatformtrue">Deploy the Same DACPAC to Multiple Platforms with <code>/p:AllowIncompatiblePlatform=true</code></h2>
<p>When building your database project, the generated DACPAC is targeted for a specific SQL platform (e.g. SQL Server 2019). If you try to deploy that DACPAC to a different type of SQL server (e.g. Azure SQL Database), you'll likely hit a platform mismatch error when the deployment starts. To override this platform match check, you can use the <code>/p:AllowIncompatiblePlatform=true</code> option with SqlPackage or your deployment tasks. This tells the deployment tools to skip platform validation and proceed with the deployment, assuming that everything in the DACPAC is compatible with wherever it is being deployed.</p>
<p>In Azure DevOps, if you're using the <code>SqlAzureDacpacDeployment</code> task, you add this extra property to the <code>AdditionalArguments</code> field:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">SqlAzureDacpacDeployment@1</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-string">...</span>
    <span class="hljs-attr">AdditionalArguments:</span> <span class="hljs-string">'/p:AllowIncompatiblePlatform=true'</span>
</code></pre>
<p>You can also manually call SqlPackage in a script step if needed, which gives you full control - especially if you're deploying to a SQL Server VM (the Azure task doesn't always work there).</p>
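<p>For reference, here’s a minimal sketch of such a script step. It assumes SqlPackage is available on the agent’s PATH; the server, database, and file names are placeholders, and authentication arguments are omitted for brevity:</p>
<pre><code class="lang-yaml">- script: SqlPackage /Action:Publish /SourceFile:$(Pipeline.Workspace)/drop/MyDatabase.dacpac /TargetServerName:myserver.database.windows.net /TargetDatabaseName:MyDatabase /p:AllowIncompatiblePlatform=true
  displayName: 'Deploy DACPAC with SqlPackage'
</code></pre>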
<h2 id="heading-build-the-project-twice-to-validate-syntax-for-both-platforms">Build the Project Twice to Validate Syntax for Both Platforms</h2>
<p>Rather than relying on deploy-time compatibility settings, another option is to build the project separately for each target platform. This adds time and complexity to the pipeline, but it is safer: the SQL project build validates the project against the <em>target platform</em>, so a completed build won’t error out mid-deploy. This way, you catch platform-specific syntax issues before deployment.</p>
<p>You can do this by specifying a different DSP (Database Schema Provider, aka target platform) at build time:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">MSBuild@1</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-attr">solution:</span> <span class="hljs-string">'**/*.sqlproj'</span>
    <span class="hljs-attr">msbuildArgs:</span> <span class="hljs-string">'/p:DSP=Sql150DatabaseSchemaProvider'</span> <span class="hljs-comment"># For SQL Server 2019</span>
</code></pre>
<p>You don’t need to specify a DSP property for the build step when it already matches the target platform set in the SQL project file; the property is only for temporarily overriding it at build time. All of the DSP values can be found in the SQL projects documentation - <a target="_blank" href="https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/target-platform?view=sql-server-ver16&amp;pivots=sq1-command-line">https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/target-platform?view=sql-server-ver16&amp;pivots=sq1-command-line</a></p>
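<p>As a sketch of that approach, the pipeline could build the project twice with a different DSP override for each target, sending each DACPAC to its own folder (the output paths here are placeholders I’ve chosen for illustration):</p>
<pre><code class="lang-yaml">- task: MSBuild@1
  inputs:
    solution: '**/*.sqlproj'
    msbuildArgs: '/p:DSP=Sql150DatabaseSchemaProvider /p:OutputPath=$(Build.ArtifactStagingDirectory)/sql2019/'
- task: MSBuild@1
  inputs:
    solution: '**/*.sqlproj'
    msbuildArgs: '/p:DSP=SqlAzureV12DatabaseSchemaProvider /p:OutputPath=$(Build.ArtifactStagingDirectory)/azuresql/'
</code></pre>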
<h2 id="heading-a-quick-plug-for-other-pipeline-fundamentals">A quick plug for other pipeline fundamentals</h2>
<ol>
<li><p>The pipeline has to have network access to the database. One benefit of the <code>SqlAzureDacpacDeployment</code> task is that it can temporarily add/remove a firewall rule for Azure SQL Database, but you may end up deploying through a self-hosted agent within your private network.</p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/publish-pipeline-artifact-v1?view=azure-pipelines">Archive</a> (publish) your DACPAC files, since they’re build artifacts and you may want to retrieve them at a later date.</p>
</li>
</ol>
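<p>On that second point, a minimal publish step for the built DACPACs might look like the following (the target path and artifact name are placeholders):</p>
<pre><code class="lang-yaml">- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'dacpacs'
</code></pre>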
]]></content:encoded></item><item><title><![CDATA[Connecting to SQL Server on macOS from a Windows VM]]></title><description><![CDATA[There are a few instances where you want to run SQL Server on macOS and interact with it from a windows VM. As someone that works within the world of SQL Server/Azure SQL, the use of SQL Server Management Studio (SSMS) comes to mind immediately - for...]]></description><link>https://blog.drewsk.tech/connecting-to-sql-server-on-macos-from-a-windows-vm</link><guid isPermaLink="true">https://blog.drewsk.tech/connecting-to-sql-server-on-macos-from-a-windows-vm</guid><category><![CDATA[macOS]]></category><category><![CDATA[SQL Server]]></category><category><![CDATA[parallels]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Mon, 15 Jan 2024 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745591380621/2ae70b3d-3402-42c6-9c7e-c3b4c32fe0d5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are a few instances where you want to run SQL Server on macOS and interact with it from a windows VM. As someone that works within the world of SQL Server/Azure SQL, the use of SQL Server Management Studio (SSMS) comes to mind immediately - for the tasks you can't complete in Azure Data Studio on macOS directly. Another adjacent (and generally unfortunate scenario) is when work with .NET Framework is required, and for that, you <em>need</em> Windows. This article will step through the network information needed to connect from a Windows VM on macOS to a SQL Server container.</p>
<h2 id="heading-a-few-basics">A few basics</h2>
<p>To set the stage: a SQL Server container is running on the macOS host, and a Windows VM is running in Parallels.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/2024/January/container-create.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-vm-networking">VM networking</h2>
<p><a target="_blank" href="https://kb.parallels.com/4948">Shared networking</a> is the recommended (and default) network setting in Parallels. In this network layout, your macOS machine is automatically sharing its network connection with the VM and this is usually sufficient for VM use. The other insight we need to connect between macOS and the Windows VM is knowledge of the <em>virtual subnet</em> (network) between the macOS host and the VM.</p>
<p>The fastest way I've found to get my network info for the VM is within the VM itself, using <code>ipconfig</code> from Windows terminal. From its output, I note the VM's IP address and its gateway address. On my machine, the VM has the address <strong>10.211.55.3</strong> and the gateway is <strong>10.211.55.1</strong>.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/2024/January/shared-network.png" alt class="image--center mx-auto" /></p>
<p>Conceptually, our macOS machine plays two roles in the VM network: it is both the gateway and a participant in the network. The SQL Server container running on our macOS host is available to the Windows VM at the IP address assigned to our host as a participant in the network.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/2024/January/network-vm.png" alt class="image--center mx-auto" /></p>
<p>Since the gateway address is x.x.x.1 and the VM is assigned x.x.x.3, I'm going to make an educated guess that my host machine is available at x.x.x.2. I gathered all of that IP address information by running <code>ipconfig</code> from the Windows VM.</p>
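<p>The educated guess above can be expressed as a tiny helper. To be clear, <code>guess_host_address</code> is a hypothetical function name for illustration, and it assumes the Parallels shared network hands out addresses on a /24-style subnet where the host sits at .2:</p>
<pre><code class="lang-python">def guess_host_address(vm_ip, host_octet=2):
    """Guess the macOS host's address on the Parallels shared network.

    Assumes the host participates in the VM's subnet at the given
    final octet (.2 by default, since the gateway takes .1).
    """
    network = vm_ip.rsplit(".", 1)[0]  # "10.211.55" from "10.211.55.3"
    return f"{network}.{host_octet}"
</code></pre>
<p>With the VM at <strong>10.211.55.3</strong>, the helper returns <strong>10.211.55.2</strong> - the address to try for the SQL Server connection.</p>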
<h2 id="heading-test-the-connection">Test the connection</h2>
<p>Using any SQL client application on the Windows VM will allow me to test the connectivity to the SQL container. This moment is a good one to fire up <a target="_blank" href="https://aka.ms/ssms">SSMS</a> in the VM. I'll use the address <strong>10.211.55.2</strong> as my SQL server address.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/2024/January/ssms-test.png" alt class="image--center mx-auto" /></p>
<p><img src="https://drewsktech.blob.core.windows.net/images/2024/January/connected-ssms.png" alt class="image--center mx-auto" /></p>
<p>If you encounter difficulties gaining connectivity, you may need to adjust firewall settings in macOS. If your firewall is enabled, use the <strong>Options</strong> button to allow specific connections.</p>
<h2 id="heading-recap">Recap</h2>
<p>While there's a lot of the development process that you can accomplish from just about any machine, there are still some specific instances where you need that Windows VM. With the default shared network setting, your macOS host has an address assigned to it that you can determine and use to connect from the Windows VM to a container running on macOS. In the setup on my machine, the Windows VM was assigned <strong>10.211.55.3</strong> and the macOS host (and SQL Server container) was available at <strong>10.211.55.2</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Using Python and Azure Functions to send data from Azure SQL Database]]></title><description><![CDATA[Using Python and Azure Functions to send data from Azure SQL Database
When building applications on Azure SQL, one of the most flexible ways to send data from your database to other systems is to use Azure Functions. Azure Functions are serverless fu...]]></description><link>https://blog.drewsk.tech/using-python-and-azure-functions-to-send-data-from-azure-sql-database</link><guid isPermaLink="true">https://blog.drewsk.tech/using-python-and-azure-functions-to-send-data-from-azure-sql-database</guid><category><![CDATA[Azure Functions]]></category><category><![CDATA[Python]]></category><category><![CDATA[Azure SQL Database]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Sun, 26 Feb 2023 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745536644001/326b22c8-1be9-4206-8846-4ab003c10351.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-using-python-and-azure-functions-to-send-data-from-azure-sql-database">Using Python and Azure Functions to send data from Azure SQL Database</h1>
<p>When building applications on Azure SQL, one of the most flexible ways to send data from your database to other systems is to use Azure Functions. Azure Functions are serverless functions that can be triggered by a variety of events, including HTTP requests, timers, and <a target="_blank" href="https://aka.ms/sqltrigger">Azure SQL Database changes</a>. In this article, we will discuss how to send data from an Azure SQL Database to an FTP server and API endpoints using Azure Functions. The complete sample code for this article is available on <a target="_blank" href="https://github.com/dzsquared/sqlbindings-python-datatransfer">GitHub</a>.</p>
<blockquote>
<p>This post is syndicated from <a target="_blank" href="https://devblogs.microsoft.com/azure-sql/using-python-and-azure-functions-to-send-data-from-azure-sql-database/">https://devblogs.microsoft.com/azure-sql/using-python-and-azure-functions-to-send-data-from-azure-sql-database/</a></p>
</blockquote>
<h2 id="heading-get-data-from-azure-sql-database-in-azure-functions">Get data from Azure SQL Database in Azure Functions</h2>
<p>With <a target="_blank" href="https://aka.ms/sqlbindings">Azure SQL bindings for Azure Functions</a> we can easily retrieve data from an Azure SQL Database in an Azure Function, leaving the boilerplate code of connecting to the database and executing queries to the Azure Functions runtime. When our solution needs to operate on a schedule, such as every morning, we can use the <a target="_blank" href="https://learn.microsoft.com/azure/azure-functions/functions-bindings-timer?tabs=in-process&amp;pivots=programming-language-python">timer trigger</a> to start the Azure Function. Python Azure Functions are composed of a <code>function.json</code> file and an <code>__init__.py</code> file. The <code>function.json</code> file is where we define the function trigger and input/output bindings, and the Python code lives in the <code>__init__.py</code> file. Querying Azure SQL Database with an Azure Function is as simple as adding an input binding to the <code>function.json</code> file:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"products"</span>,
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"sql"</span>,
    <span class="hljs-attr">"direction"</span>: <span class="hljs-string">"in"</span>,
    <span class="hljs-attr">"commandText"</span>: <span class="hljs-string">"SELECT [ProductID],[Name],[ProductModel],[Description] FROM [SalesLT].[vProductAndDescription]"</span>,
    <span class="hljs-attr">"commandType"</span>: <span class="hljs-string">"Text"</span>,
    <span class="hljs-attr">"connectionStringSetting"</span>: <span class="hljs-string">"SqlConnectionString"</span>
}
</code></pre>
<p>Once we have the input binding defined, we can use the parameter <code>products</code> in our function code to access the data returned by the query. The <code>products</code> parameter is a list of <code>SqlRow</code> objects, which are similar to Python dictionaries.</p>
<p>The Azure SQL input bindings for Azure Functions can run any SQL query, including stored procedures. The <code>commandText</code> property is where we define the SQL query to run. In the example above, we're selecting four columns from the view <code>SalesLT.vProductAndDescription</code>. The <code>connectionStringSetting</code> property is where we define the name of the app setting that contains the connection string to the Azure SQL Database. <a target="_blank" href="https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python">Additional examples</a> are available which show using additional features, including parameters and executing SQL stored procedures.</p>
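<p>For example, an input binding that executes a stored procedure instead of inline text only needs the <code>commandType</code> changed to <code>StoredProcedure</code> - the procedure name below, <code>[SalesLT].[GetProductDescriptions]</code>, is a hypothetical placeholder:</p>
<pre><code class="lang-json">{
    "name": "products",
    "type": "sql",
    "direction": "in",
    "commandText": "[SalesLT].[GetProductDescriptions]",
    "commandType": "StoredProcedure",
    "connectionStringSetting": "SqlConnectionString"
}
</code></pre>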
<p>Throughout the sample we have several values in the Azure Functions application settings, including the Azure SQL connection string, the API endpoint URL, and the FTP server login information. Keeping this sort of sensitive information out of code is a best practice that you’ll want to follow.</p>
<h2 id="heading-sending-data-to-an-api-endpoint">Sending data to an API endpoint</h2>
<p>To send data to an API endpoint, we will use the <code>requests</code> library for its simplicity and the built-in <code>json</code> library. With the <code>requests</code> library, we can easily send a <code>POST</code> request to an API endpoint with the data we want to send. The SQL input binding sends data as a list of <code>SqlRow</code> objects, which are similar to Python dictionaries. We can use the <code>json</code> library to serialize the data into a JSON string, which is the format that most APIs expect.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> os

<span class="hljs-keyword">import</span> azure.functions <span class="hljs-keyword">as</span> func
<span class="hljs-keyword">import</span> requests

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>(<span class="hljs-params">everyDayAt5AM: func.TimerRequest, products: func.SqlRowList</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    logging.info(<span class="hljs-string">'Python timer trigger function started'</span>)
    <span class="hljs-comment"># convert the SQL data to JSON in memory</span>
    rows = list(map(<span class="hljs-keyword">lambda</span> r: json.loads(r.to_json()), products))

    <span class="hljs-comment"># get the API endpoint from app settings</span>
    api_url = os.environ[<span class="hljs-string">'API_URL'</span>]

    <span class="hljs-comment"># send the data to the API</span>
    response = requests.post(api_url, json=rows)
    <span class="hljs-comment"># check for 2xx status code</span>
    <span class="hljs-keyword">if</span> response.status_code // <span class="hljs-number">100</span> != <span class="hljs-number">2</span>:
        logging.error(<span class="hljs-string">f"API response: <span class="hljs-subst">{response.status_code}</span> <span class="hljs-subst">{response.reason}</span>"</span>)
    <span class="hljs-keyword">else</span>:
        logging.info(<span class="hljs-string">f"API response: <span class="hljs-subst">{response.status_code}</span> <span class="hljs-subst">{response.reason}</span>"</span>)
</code></pre>
<p>In our Azure Function we check the API response status code to make sure the request was successful. If the status code is not in the 2xx range, we log an error. If the status code is in the 2xx range, we log a success message. By logging an error, we can monitor the Azure Functions logs to see if there are any issues with calling the API endpoint.</p>
<p>That’s it! Those ~10 lines of Python are all we need to run a query against our Azure SQL Database and send that data to the endpoint we set in the Azure Functions app settings.</p>
<h2 id="heading-sending-data-to-an-ftp-server">Sending data to an FTP server</h2>
<p>While we formatted the data as JSON to send to an API endpoint, we may want to send our data to an FTP server as a CSV file. By using a package like <code>pandas</code>, we can quickly convert the data to a comma-separated format.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> ftplib
<span class="hljs-keyword">import</span> io
<span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> os

<span class="hljs-keyword">import</span> azure.functions <span class="hljs-keyword">as</span> func
<span class="hljs-keyword">import</span> pandas

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>(<span class="hljs-params">everyDayAt5AM: func.TimerRequest, products: func.SqlRowList</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    logging.info(<span class="hljs-string">'Python timer trigger function started'</span>)
    filename = <span class="hljs-string">"products.txt"</span>
    filesize = <span class="hljs-number">0</span>

    <span class="hljs-comment"># convert the SQL data to comma separated text</span>
    product_list = pandas.DataFrame(products)
    product_csv = product_list.to_csv(index=<span class="hljs-literal">False</span>)
</code></pre>
<p>Python has a built-in library, <code>ftplib</code>, that can interact with FTP servers. After retrieving the FTP server information from the app settings, we can connect to the FTP server and upload the data. Instead of writing the data to a local file before uploading to the FTP server, we can use the <code>BytesIO</code> class to keep the binary data in memory.</p>
<pre><code class="lang-python">    datatosend = io.BytesIO(product_csv.encode(<span class="hljs-string">'utf-8'</span>))

    <span class="hljs-comment"># get FTP connection details from app settings</span>
    FTP_HOST = os.environ[<span class="hljs-string">'FTP_HOST'</span>]
    FTP_USER = os.environ[<span class="hljs-string">'FTP_USER'</span>]
    FTP_PASS = os.environ[<span class="hljs-string">'FTP_PASS'</span>]

    <span class="hljs-comment"># connect to the FTP server</span>
    <span class="hljs-keyword">try</span>:
        <span class="hljs-keyword">with</span> ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS, encoding=<span class="hljs-string">"utf-8"</span>) <span class="hljs-keyword">as</span> ftp:
            logging.info(ftp.getwelcome())
            <span class="hljs-comment"># use FTP's STOR command to upload the data</span>
            ftp.storbinary(<span class="hljs-string">f"STOR <span class="hljs-subst">{filename}</span>"</span>, datatosend)
            filesize = ftp.size(filename)
            ftp.quit()
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        logging.error(e)

    logging.info(<span class="hljs-string">f"File <span class="hljs-subst">{filename}</span> uploaded to FTP server. Size: <span class="hljs-subst">{filesize}</span> bytes"</span>)
</code></pre>
<h2 id="heading-wrapping-up">Wrapping up</h2>
<p>With Azure Functions we have a low-overhead and flexible way to build application components and the Azure SQL bindings make it easy to retrieve data from Azure SQL Database. In this article, we took a brief look at an approach to sending data from an Azure SQL Database to an FTP server and API endpoints with Python in Azure Functions. If you'd like to dive into this sample further, the code is available on <a target="_blank" href="https://github.com/dzsquared/sqlbindings-python-datatransfer">GitHub</a>.</p>
]]></content:encoded></item><item><title><![CDATA[OMSCS Research Options]]></title><description><![CDATA[The Georgia Tech online masters of science in computer science has been a groundbreaking endeavour to bring an MS CS program to students at a massive scale.

In five years, the program has received over 25,000 applications and enrolled more than 10,0...]]></description><link>https://blog.drewsk.tech/omscs-research-options</link><guid isPermaLink="true">https://blog.drewsk.tech/omscs-research-options</guid><category><![CDATA[omscs]]></category><category><![CDATA[georgia-tech]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Mon, 02 Jan 2023 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745616798843/cd4c100b-7ed0-443b-9da2-ad0bffc64c7e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Georgia Tech online masters of science in computer science has been a groundbreaking endeavour to bring an MS CS program to students at a massive scale.</p>
<blockquote>
<p>In five years, the program has received over 25,000 applications and enrolled more than 10,000 students (including those who have graduated), all working their way toward the same Georgia Tech M.S. in Computer Science as their on-campus counterparts. - https://omscs.gatech.edu/explore-oms-cs</p>
</blockquote>
<p>Despite the use of online communities, legions of teaching assistants, and other techniques to expand the reach of a traditional masters of science program to thousands of students, there are still opportunities for students to be deeply engaged with university resources. One of these categories of opportunities is research and independent projects, which I was fortunate to experience in three ways during my time in the program.</p>
<h2 id="heading-education-technology-foundations-cs6460">Education Technology Foundations (CS6460)</h2>
<p>Previous review comments <a target="_blank" href="https://drewsk.tech/omcs-graduation-and-recap#heading-education-technology-foundations-cs6460">here</a>.</p>
<p>I'm starting with the EdTech course because the barrier to entry is low and the pathway to success is well guided. Registration for EdTech (CS6460) is open to OMSCS students without additional forms and even if you don't have a project in mind before the course, the first few weeks of the course involve exploration of academic literature on the intersection of education and technology. If you do have a project in mind, the first part of the course will help you focus and refine your idea to best fit the remainder of the course.</p>
<p>To be specific about your success in this course, you will agree on an outcome with a TA/mentor along with a planned approach when you decide on your topic. Along the way, adjustments can be made, so it's in your best interest to be honest with your mentor about challenges you are having or unforeseen barriers. In many cases your mentor will have experience in or adjacent to the area you're working on and can provide helpful suggestions. Much of this is similar to working with a PI (principal investigator) in a research group except the EdTech course takes a more structured approach to the project plan than most research groups.</p>
<p>After EdTech, you can optionally take your project further through a few avenues:</p>
<ul>
<li><p>participate in the student showcase, which is an online forum format of student presentations at the end of each semester</p>
</li>
<li><p>continue working on the project independently, which is especially applicable if you are building an open source component</p>
</li>
<li><p>approach a professor working in a related area and inquire if they're willing to mentor you for continued work through CS8903 (see below), a number of EdTech projects will fall in <a target="_blank" href="https://lucylabs.gatech.edu/david-joyner/">Dr. Joyner's areas of interest</a></p>
</li>
</ul>
<p>In my opinion, unless you do 2-3x more work during the EdTech course than is required for the course, you are unlikely to have a project ready for publication in academic journals at the end of this class. However, if you continue the project past the course you may be on the way to publication. Regardless, EdTech can be a tremendously rewarding adventure into working on an individual or team project focused on an area of your interest.</p>
<h2 id="heading-vertically-integrated-projects-vip">Vertically Integrated Projects (VIP)</h2>
<p>Previous review comments <a target="_blank" href="https://drewsk.tech/omcs-graduation-and-recap#heading-vertically-integrated-project-big-data-and-quantum-mechanics">here</a>.</p>
<p>OMSCS students are eligible to apply for VIP teams; an email is usually sent once a semester to remind students, but you can begin investigating options at any time. <a target="_blank" href="https://www.vip.gatech.edu/online-mscs-students">Applications</a> are open around the time of phase 1 registration, so you need to be thinking ahead if at all possible.</p>
<p>My <em>suggestion</em> for participating in VIP is to not only complete the application to a team, but - if you have related experience (professional or academic) or a compelling interest in a specific topic - to also communicate with the professor leading the team via email. Do enough background research into the work the team is doing so that you can ask engaging questions.</p>
<p>An important part of VIP is the team itself. There is a high likelihood that you will be collaborating with other students, undergraduate and/or graduate. This makes VIP an excellent opportunity to learn from others but also to be a leader and help other students. OMSCS students can be highly desirable to VIP teams because we come to Georgia Tech with diverse backgrounds and experiences, exactly the kind of innovation that vertically integrated projects are seeking.</p>
<p>My experience with VIP involved relating previous undergraduate and graduate research to the VIP project components, as well as areas I'm interested in learning more about. The VIP team I joined had a branch that required some development experience that I was able to work on where my previous experience was helpful in gaining context quickly and getting started.</p>
<h2 id="heading-special-projects-cs8903">Special Projects (CS8903)</h2>
<p>Previous review comments <a target="_blank" href="https://drewsk.tech/omcs-graduation-and-recap#heading-special-problems-cs8903">here</a>.</p>
<p>Special Projects does not change your OMSCS degree pathway from being coursework-based, but it does create a significant opportunity to do research for credit (3 or 9 credits) that can be part of your coursework credits. These are credits that count towards your "free electives", no matter what your specialization is. These statements are based on the current content of https://omscs.gatech.edu/program-info/specializations; you should check with your academic advisor before pursuing this option.</p>
<h3 id="heading-understanding-academic-research">Understanding academic research</h3>
<p>Academic faculty are highly motivated to conduct research, where their success is measured mostly by their publications but also by the success of their team (postdocs, graduate students, and extraordinary undergrads). The phrase "publish or perish" is invoked as a reminder that research <em>must go on</em>. This is not to suggest that academic faculty aren't interested in your success as a student, but it's really important to understand some of these dynamics if you want to successfully participate in academic research. First off - I'll define your success as an OMSCS student in CS8903:</p>
<ol>
<li><p>Making forward and novel progress on a project of sufficient difficulty that it makes for engaging conversation with people on a deep technical level</p>
</li>
<li><p>Completing agreed upon checkpoints to complete the CS8903 credit hours. If you were a full time research student, you would be trying to meet expectations to continue your "employment" in the group</p>
</li>
<li><p>Optionally, your work is published or included in a publication by the research group</p>
</li>
</ol>
<p>Research groups are generally structured horizontally into subgroups of people around projects under the PI (principal investigator, or faculty member). A research group around a PI can contain post-doctoral researchers (postdocs), graduate/PhD students, and undergraduate students. In larger groups, the more senior folks (postdocs) can take on a good deal of mentorship of student researchers. Newer faculty members will have smaller groups that are often more tight-knit in both the vertical and horizontal direction.</p>
<h3 id="heading-getting-started-with-cs8903">Getting Started with CS8903</h3>
<p>The process for starting CS8903 is a bit more involved than the previous options, but the possibilities are nearly endless! There are <a target="_blank" href="https://www.cc.gatech.edu/facts-and-rankings">over 100 academic faculty in GT Computing</a> and all of their research areas are potential content for your work for CS8903. To enroll in CS8903 and work on a faculty member's research project, you will need to:</p>
<ol>
<li><p>Educate yourself on their current work, including publications and opportunities for further advancement in that area. You may want to familiarize yourself with the GT Library to get access to some of the content.</p>
</li>
<li><p>Reach out (via email) to introduce yourself and/or complete any research interest forms they have linked on their website (not all faculty have these). Similar to the introduction email I recommend for VIP (now required there), you're trying to make a good impression: you're smart, inquisitive, hard-working, etc.</p>
</li>
<li><p>Ask the professor if they're willing to supervise you for CS8903 where you'd work on their project <em>{fill in the blank here}</em> based on your interest/experience/knowledge/unique charm. It's ideal to have a virtual meeting with the professor mid-semester to discuss their work and brainstorm what you might work on. Look at the CS8903 permit form <em>before</em> this meeting.</p>
</li>
<li><p>If the professor agrees, complete and submit the <a target="_blank" href="https://www.cc.gatech.edu/graduate-forms-procedures">CS8903 permit form</a>, including a statement of research.</p>
</li>
<li><p>Register for CS8903 during registration and set up regular meetings with your advisor. If their research group does virtual or hybrid meetings, you may want to join those.</p>
</li>
</ol>
<blockquote>
<p>The statement of research should be two or three pages and should include the following:</p>
<ul>
<li><p>Problem Statement or Project Goals</p>
</li>
<li><p>Solution Proposal or Approach</p>
</li>
<li><p>Schedule of Work</p>
</li>
<li><p>Expected Results or Outcome (Deliverables)</p>
</li>
</ul>
<p>School of Computer Science Special Problems (CS 8903) Permit Form, https://www.cc.gatech.edu/graduate-forms-procedures</p>
</blockquote>
<p>I got started with CS8903 by digging further into the "Databases" <a target="_blank" href="https://www.cc.gatech.edu/research-areas">area of research</a> and looking through all the faculty at Georgia Tech I could find that work with <a target="_blank" href="https://db.cc.gatech.edu/">databases</a>. <a target="_blank" href="https://faculty.cc.gatech.edu/~jarulraj/">Prof. Arulraj</a> is involved in a few projects, some newer and others carrying over from their original PhD work. Their research group is relatively small - which is a pro or con depending on a number of other factors. One of those projects was of significant interest to me, so I read no fewer than 10 papers of theirs or referenced by them before reaching out to ask about future opportunities.</p>
<h2 id="heading-summary">Summary</h2>
<p>This article isn't necessarily exhaustive, and there may be other ways to conduct research in OMSCS, including reaching out to a professor much as you would for CS8903 but under the premise of doing the research work "for free". As the OMSCS program evolves over time, additional opportunities may become available for students. In my experience, all three of the VIP, EdTech, and CS8903 options were readily available to customize the work done for OMSCS to fit my area of interest. Know your strengths and don't be afraid to explore!</p>
]]></content:encoded></item><item><title><![CDATA[OMSCS Graduation and Recap]]></title><description><![CDATA[The online Masters of Computer Science program at Georgia Tech is completely online and designed for working, part-time students. I applied to OMSCS in 2015 when I was concerned about what I saw as a plateau ahead in my career. At the time I worked f...]]></description><link>https://blog.drewsk.tech/omcs-graduation-and-recap</link><guid isPermaLink="true">https://blog.drewsk.tech/omcs-graduation-and-recap</guid><category><![CDATA[omscs]]></category><category><![CDATA[georgia-tech]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Sat, 31 Dec 2022 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745616774386/c5086044-d020-4325-bca7-e7d355a6efcc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The online Masters of Computer Science program at Georgia Tech is completely online and designed for working, part-time students. I applied to OMSCS in 2015 when I was concerned about what I saw as a plateau ahead in my career. At the time I worked for a smaller company doing a little bit of everything in technology and development. With a MS in chemistry and a minor in computer science under my belt I was confident that I would be able to succeed in OMSCS and it would contribute to my part of my aspiration to be a better developer and work as software engineer in the future.</p>
<p>I received the decision notification email while driving from Missouri back home to Minnesota. It was unpleasantly cold and windy at a gas station in Iowa as I navigated the applicant portal from my iPhone. Also relatively unpleasant was the news that I had not been accepted to OMSCS.</p>
<p>In December 2022 (this month), I graduated from OMSCS. In this post I'm going to talk just a bit about what I did between being rejected in 2015 and being admitted to OMSCS as well as review some pieces of OMSCS.</p>
<h2 id="heading-pre-omscs">Pre-OMSCS</h2>
<p>I'm not going to speculate about the differences in my applications and the applicant pools between 2015 and 2020, but I do want to talk about how I was a different person going into OMSCS because of the additional experiences I did have in that time.</p>
<ul>
<li><p>I developed a few skills for Amazon's Echo/Alexa devices, and as a part of it started thinking more about developer experiences and platform integration. I even took a day off once to go to an Alexa Dev event in Minneapolis "for fun".</p>
</li>
<li><p>I got involved in professional organizations around the systems I used at work (SQL Server, Dynamics SL). With SQL Server I was able to get back to speaking/teaching and learned a whole lot from the great folks involved with SQL Saturday MN (PASSMN). With Dynamics SL I was able to learn more about the depths of the product in organizing content for conferences and eventually served on the Board of Directors.</p>
</li>
<li><p>I took a few <a target="_blank" href="/2017/03/22/oh-hey-data-platform-mcse-2017/">certification exams</a> to solidify and validate technical knowledge.</p>
</li>
<li><p>I completed two professional certificate programs through Coursera focused on <a target="_blank" href="https://www.coursera.org/specializations/product-management">product management</a> and <a target="_blank" href="https://www.coursera.org/specializations/business-strategy">business strategy</a>.</p>
</li>
<li><p>I created a few extensions for Azure Data Studio.</p>
</li>
</ul>
<p>To be honest, I wasn't planning to re-apply for OMSCS. It was in early 2020 when I was looking to apply for new roles (landing in SQL experiences at Microsoft) that I decided to go after OMSCS again. I figured if one avenue didn't work out, maybe the other would.</p>
<h2 id="heading-omscs-review">OMSCS Review</h2>
<p>I worked on the <a target="_blank" href="https://omscs.gatech.edu/specialization-computing-systems">"Computing Systems" specialization</a> and over 2.5 years (2020-2022) completed the requirements for graduation. COVID-19 restrictions were in full force during the early semesters - which had their benefits at times - but we had just moved to Washington and I had just started a new job. In later semesters all 3 of our dogs passed away. There are some colleges whose hybrid or online MS CS programs still have significant synchronous or on-campus components, but Georgia Tech's OMSCS is centered fully on the online campus.</p>
<p>Despite all the daily stresses and life events happening during OMSCS, my key to success was repeatedly setting aside everything else to focus on course content for hours every week. For some classes this might be in the range of 10 hours, for others it was 20+ hours. Given the demands of life, I am incredibly fortunate to have a supportive spouse who made a number of sacrifices over that span.</p>
<p>There are some realities of a graduate program that can be jarring to the student, especially if they're expecting something more like a bachelors degree:</p>
<ul>
<li><p>the pace at which material is covered requires individual learning - know how you learn topics</p>
</li>
<li><p>the topics/subjects are provided, but the materials that you need to master it may not be explicitly provided - know how to look for books, papers, videos</p>
</li>
<li><p>the background of each student varies widely and some folks grasp topics faster than others - know how to leverage study groups or ask peers for advice</p>
</li>
</ul>
<h3 id="heading-courses">Courses</h3>
<h4 id="heading-fall-2020">Fall 2020</h4>
<p>The common advice for the first semester is to take only a single class to allow you to adjust to graduate school and avoid becoming overwhelmed. I didn't listen and added the VIP project course, but I also didn't really read course reviews yet either.</p>
<h5 id="heading-vertically-integrated-project-big-data-and-quantum-mechanics">Vertically Integrated Project (Big Data and Quantum Mechanics):</h5>
<p>⭐️⭐️⭐️⭐️_ (4/5 stars)</p>
<p>The <a target="_blank" href="https://www.vip.gatech.edu/vip-vertically-integrated-projects-program">Vertically Integrated Projects (VIP)</a> program is a cross-functional research initiative at Georgia Tech where bachelors and masters students can contribute to larger projects alongside faculty and PhD students, where the increased innovation benefits the multidisciplinary projects and students earn course credit for their work. As a student, team assignment is a selective process, but for OMSCS students there are a good number of teams that are seeking CS expertise for their projects. I applied to the <a target="_blank" href="https://www.vip.gatech.edu/teams/vvi">Medford group</a> to work on their visualization software for ML-assisted DFT (<a target="_blank" href="https://medford-group.github.io/ElectroLens/">ElectroLens</a>). The team was an ideal fit due to my original MS in chemistry, where my thesis leveraged DFT, and ElectroLens is based in Electron/JavaScript. I can't recommend VIP highly enough, whether you have previous research experience or are looking for more guidance - you get great exposure to academic research through this program.</p>
<h5 id="heading-database-systems-concepts-and-design-cs6400">Database Systems Concepts and Design (CS6400):</h5>
<p>⭐️____ (1/5 stars)</p>
<p><a target="_blank" href="https://omscs.gatech.edu/cs-6400-database-systems-concepts-and-design">This course</a> moved fairly slowly through entity-relationship diagrams, relational algebra/calculus, and report writing for a LAMP stack application. The preface for the textbook used literally recommends the chapters covered by this class for an introductory undergraduate course. I was sorely disappointed at the lack of coverage for index structures, query processing, transactions, or security (all included in the textbook for use at the graduate level). Frankly, I was so unimpressed with the databases course that if I hadn't also done VIP that first semester I would have been tempted to stop the program and focus on promotions at work. I was concerned that every course would have such low quality of content. <em>I didn't find this to be true, and am glad I continued on to better courses.</em></p>
<ul>
<li><p>group project (you can get team members who do nothing)</p>
</li>
<li><p>rudimentary content all the way through</p>
</li>
<li><p>terribly worded exam questions</p>
</li>
</ul>
<h4 id="heading-spring-2021">Spring 2021</h4>
<p>After carefully reading course reviews, I paired an easier course (Computer Networks) with a more time-consuming course (Intro to OS), both well-rated.</p>
<h5 id="heading-computer-networks-cs6250">Computer Networks (CS6250):</h5>
<p>⭐️⭐️⭐️⭐️_ (4/5 stars)</p>
<p>My systems administration background served me well in <a target="_blank" href="https://omscs.gatech.edu/cs-6250-computer-networks">this class</a>, which had a nice balance between exams/quizzes/projects. The content looked at how some networking fundamentals work behind the scenes and the projects were often Python implementations of algorithms. Extra kudos to the instructional design for this class such that students could collaborate on creating test cases and/or unit test frameworks for the projects, which created additional learning opportunities.</p>
<ul>
<li><p>easy-to-moderate assignments</p>
</li>
<li><p>content chopped up into bite-size segments</p>
</li>
</ul>
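<p>As an illustration of the style of those projects (this is my own sketch, not actual course code), a shortest-path computation in the Bellman-Ford family - the foundation of distance-vector routing - might look like:</p>

```python
def bellman_ford(edges, num_nodes, source):
    """edges: list of (u, v, weight) tuples; returns shortest distances from source."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    # Relax every edge num_nodes - 1 times; distances converge because
    # a shortest path uses at most num_nodes - 1 edges.
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical 4-node graph for illustration
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(edges, 4, 0))  # [0, 3, 1, 4]
```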
<h5 id="heading-graduate-intro-to-os-cs6200">Graduate Intro to OS (CS6200):</h5>
<p>⭐️⭐️⭐️⭐️⭐️ (5/5 stars)</p>
<p><a target="_blank" href="https://omscs.gatech.edu/cs-6200-introduction-operating-systems">This course</a> is legendary for its content, project time commitment, and instruction. The depth and pacing of the topics is not overwhelming, but studying for the exams with flash cards was necessary to make sure I was picking up the material sufficiently. On top of preparing for exams with the course videos and reading, this course's projects (3 of them) are each about a 40 hour commitment (varying based on your experience with C, debugging, and acclimating to an existing codebase). I had at least 1 near-breakdown with each project, often at late hours of the night, as my brain stretched to grasp the problem space and properly allocate memory. If I had to pick a favorite class from the program - this would be it. After completing the course I wanted to take <a target="_blank" href="https://omscs.gatech.edu/cs-6210-advanced-operating-systems">advanced operating systems</a> but ultimately didn't fit it in.</p>
<ul>
<li><p>2 moderate difficulty exams that aren't too highly weighted</p>
</li>
<li><p>3 difficult assignments with clear connection to course material</p>
</li>
</ul>
<h4 id="heading-summer-2021">Summer 2021</h4>
<p>Some students opt to have the summer free from school and don't take a class, especially since it's an abbreviated semester. I opted for enrolling in classes that didn't have enormous workloads during the summers.</p>
<h5 id="heading-software-arch-amp-design-cs6310">Software Arch &amp; Design (CS6310):</h5>
<p>⭐️⭐️___ (2/5 stars)</p>
<p>While <a target="_blank" href="https://omscs.gatech.edu/cs-6310-software-architecture-design">this course</a> was full of busy work through in-module quizzes, it also offered a lot of supplementary/required readings. The most notorious is likely the <a target="_blank" href="https://www.digitalocean.com/community/tutorials/gangs-of-four-gof-design-patterns">Gang of Four</a> design patterns, but there are many others, including some solid nods to <a target="_blank" href="https://en.wikipedia.org/wiki/Robert_C._Martin">Uncle Bob</a>. The course had about as many assignments on communicating a software architecture through UML/OCL as on designing one, which is valuable, but there's a depth of architecture design content available that would have been interesting to cover. The bulk of the grade for this course came from project assignments, which culminated in a group project component. As in the first-semester group project, there was a group member who ultimately shouldn't have received any points: despite multiple people trying to bring them up to speed, they were unable to build the project code, much less contribute. As long as your group has only 1 or 2 people in this category (out of 4 or 5), you'll be able to pull through and complete the work, but it is a risk. <em>I'm a huge supporter of the program's high acceptance rate, but I'd propose approving a subset of individual-work-only courses for first-semester students, so that those who need time to ramp up aren't a bottleneck in group projects.</em></p>
<ul>
<li><p>most valuable course content is in supplemental material</p>
</li>
<li><p>group project</p>
</li>
<li><p>"easy course" - highest grade in the program</p>
</li>
</ul>
<h4 id="heading-fall-2021">Fall 2021</h4>
<p>Individually both of these courses are notable for content and instructional style, but they additionally share a significant amount of academic paper reading and writing. To spare myself the context switching, I paired the courses up for a semester where I could stay in the mindset of interaction design and writing.</p>
<h5 id="heading-human-computer-interaction-cs6750">Human-Computer Interaction (CS6750):</h5>
<p>⭐️⭐️⭐️⭐️⭐️ (5/5 stars)</p>
<p><a target="_blank" href="https://omscs.gatech.edu/cs-6750-human-computer-interaction">HCI</a> is a "Dr. Joyner course" with 2 aligning sets of papers focused on the principles learned and applying those as methods towards a project idea. There's a good amount of <a target="_blank" href="https://omscs6750.gatech.edu/spring-2022/required-reading-list/">engaging reading</a>, and it certainly helps that I'm interested in the topic. I still use some of those resources from time to time for work as a PM. As tempting as it was to use one of the SQL tools for the project, I ended up proposing improvements to the GPS navigation component of Apple CarPlay, and had a little fun along the way with <a target="_blank" href="https://github.com/dzsquared/cs6750-randomimage">user surveys</a> and scraping product review APIs.</p>
<h5 id="heading-education-technology-foundations-cs6460">Education Technology Foundations (CS6460):</h5>
<p>⭐️⭐️⭐️⭐️_ (4/5 stars)</p>
<p><a target="_blank" href="https://omscs.gatech.edu/cs-6460-educational-technology">EdTech</a> is another "Dr. Joyner course" and is <em>similarly</em> structured to HCI but with a much larger independent project component. You are given a <a target="_blank" href="https://omscs6460.gatech.edu/research-guide/">general direction</a> to investigate a gap in education where technology plays a role, and can conduct empirical research or implement improvements. My focus area, to no one's surprise, was database systems. After scouring the literature on database systems in higher education, one of the gaps that emerges is automated grading infrastructure for database assignments. I took a swing at the issue with <a target="_blank" href="https://robertdroptablestudents.github.io/">SQLGrader</a>, designed to scale to courses such as the databases course I took in the first semester.</p>
<p><img src="https://robertdroptablestudents.github.io/assets/diagrams/arch.png" alt="SQLGrader architecture overview" class="image--center mx-auto" /></p>
<p>I can't say I continued working on SQLGrader despite there being many opportunities to improve it, as I had also started a project proposal for the next semester.</p>
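<p>At its core, an autograder like this runs the student's query and a reference query against the same data and compares the result sets. A minimal sketch of that idea using Python's built-in <code>sqlite3</code> (my own simplification for illustration, not SQLGrader's actual implementation):</p>

```python
import sqlite3

def grade_query(db_setup_sql, reference_sql, student_sql, ordered=False):
    """Run both queries against a fresh in-memory database and compare results.
    When ordered is False, row order is ignored (typical unless the
    assignment requires ORDER BY)."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(db_setup_sql)
    expected = conn.execute(reference_sql).fetchall()
    try:
        actual = conn.execute(student_sql).fetchall()
    except sqlite3.Error:
        return False  # student query failed to run at all
    if ordered:
        return actual == expected
    return sorted(actual) == sorted(expected)

# Hypothetical assignment data for illustration
setup = """
CREATE TABLE students(name TEXT, grade INTEGER);
INSERT INTO students VALUES ('ada', 95), ('alan', 80);
"""
print(grade_query(setup, "SELECT name FROM students WHERE grade > 90",
                  "SELECT name FROM students WHERE grade >= 91"))  # True
```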
<h4 id="heading-spring-2022">Spring 2022</h4>
<h5 id="heading-special-problems-cs8903">Special Problems (CS8903):</h5>
<p>I was very fortunate that <a target="_blank" href="https://faculty.cc.gatech.edu/~jarulraj/">Prof. Joy Arulraj</a> was receptive to my inquiry and project proposal, which took a tiny step forward from his <a target="_blank" href="https://github.com/jarulraj/sqlcheck">SQLCheck tool</a> for identifying antipatterns in SQL code. The second version of it was fresh off a feature in <a target="_blank" href="http://vldb.org/pvldb/vol14/p2779-ghosh.pdf">VLDB</a>. The work I did was split into 2 categories: the first was minor/quick improvements to the original (open source) SQLCheck, and the second was to establish forward motion on the second generation of SQLCheck. This involved updating dependencies, hardening the container images with a few best practices, and establishing patterns for sharing compute resources with other projects in the DB research group (egress networks, SSL configuration, etc). The Special Problems course is designed to mix a thesis-driven Masters degree with a coursework-driven Masters degree such that the learner can dip their toes in or immerse further in a research area. I won't give this a star rating because it is a phenomenal option for OMSCS students and Dr. Arulraj was a fantastic mentor, but I know my mental state was rapidly declining during 2022, and had I engaged with this at a different time I would have gotten more out of it.</p>
<p><em>I was very worn out from the program at this point and on top of it 2 of our dogs passed away during the spring semester. I'm very glad that I stepped back to taking 1 class at a time to avert total meltdown and to afford more time to spend with Ellie and Louie during their final days.</em></p>
<h4 id="heading-summer-2022">Summer 2022</h4>
<h5 id="heading-software-analysis-and-testing-cs6340">Software Analysis and Testing (CS6340):</h5>
<p>⭐️⭐️⭐️__ (3/5 stars)</p>
<p>The <a target="_blank" href="https://omscs.gatech.edu/cs-6340-software-analysis">Software Analysis course</a> covered automated testing/analysis concepts, for example looking at how compilers can detect issues like uninitialized variables in code. Many of the topics were applicable to the security of software (eg fuzz testing), but there's a strong link to the importance of the tools that are used as software is written (or compiled). A good portion of the assignments/labs leveraged <a target="_blank" href="https://llvm.org/">LLVM</a> C++ APIs, however one lab used TypeScript/JavaScript to demonstrate type-checking, and delta debugging was explored in Java. No complaints about the variety of languages; my only complaint is that the course seemed to try too hard to be "light", with short assignments and only a single exam. It's not an exaggeration that a handful of the assignments were less than 60 minutes of work. This course could be significantly improved with more investment in the course assessments. An OK course, on the edge of being a good one.</p>
<ul>
<li><p>easy-to-moderate assignments, with most of the difficulty coming from labs requiring an older version of LLVM</p>
</li>
<li><p>content chopped up into bite-size segments</p>
</li>
</ul>
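<p>To give a flavor of the delta debugging topic (my own simplified Python sketch of the ddmin idea, not course material): given an input that triggers a bug, repeatedly try removing chunks of it, keeping any smaller input that still fails, until no chunk can be removed.</p>

```python
def ddmin(test, inp):
    """Shrink inp to a smaller input that still fails.
    test(x) returns True when x still triggers the bug."""
    n = 2  # number of chunks to split into
    while len(inp) >= 2:
        chunk = len(inp) // n
        reduced = False
        # Try deleting each chunk; keep any deletion that still fails.
        for i in range(n):
            candidate = inp[:i * chunk] + inp[(i + 1) * chunk:]
            if candidate and test(candidate):
                inp = candidate
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break  # already at single-character granularity
            n = min(n * 2, len(inp))  # retry with finer granularity
    return inp

# Toy "bug": any input containing both 'a' and 'b' fails.
print(ddmin(lambda s: "a" in s and "b" in s, "xxaxxbxx"))  # "ab"
```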
<h5 id="heading-machine-learning-and-data-science-tooling-seminar-cs8001">Machine Learning and Data Science Tooling Seminar (CS8001):</h5>
<p>The seminar concept was introduced in 2022 and I was thrilled! These offerings are 1-credit-hour options to interact with others and work through a breadth of light material on a topic, ideal for semesters where you expect a little extra time but not enough for a full course. I very much recommend trying out at least 1 during OMSCS.</p>
<h4 id="heading-fall-2022">Fall 2022</h4>
<h5 id="heading-introduction-to-graduate-algorithms-cs6515">Introduction to Graduate Algorithms (CS6515):</h5>
<p>⭐️⭐️⭐️⭐️_ (4/5 stars)</p>
<p>I'm torn on the rating for <a target="_blank" href="https://omscs.gatech.edu/cs-6515-intro-graduate-algorithms">Intro to Graduate Algorithms</a> because it is a good course (4 stars) but it is a graduation requirement and structured differently than most other courses in the program, leading to unnecessary stress for the ~850 students each semester as they scratch and claw their way out (2 stars). The downfall of Georgia Tech's popularity is that this course bottlenecks at the end of the program and is all but impossible to register for until your last 1, maybe 2 semesters. A number of students take it more than once to receive a passing grade, and I understand that there's no real incentive to have students take this course earlier, when they might quit the program instead of toughing it out. It very much reminded me of teaching science for ITT Technical Institute to first-semester students and how the dean of students would come remind me, frequently, that I was grading too harshly and reducing the enrollment numbers for later courses. <em>Many other courses that were difficult to enroll in now have a reduced bottleneck, so I hold out hope that in a few more years, graduate algorithms won't be the last class every student takes.</em></p>
<p>This course is notoriously difficult, almost a self-perpetuating cycle at this point, since a good portion of the difficulty comes from our responses to stress and inability to think under pressure. If you, like many other students in the program, are far removed from your last formal discussion of algorithms, learning how to even answer homework and exam questions will be difficult. That's ok, you can handle that. You're going to need to do a lot of practice problems anyway (dynamic programming, graph algorithms, divide and conquer, linear programming) to really grasp the material, and you're going to do them until you don't need your notes or any prompts to work out the answer completely. This class isn't a leetcode prep course, and although it does introduce some concepts that can be helpful in that prep you will write &lt; 200 lines of code for the class. Office hours (mini lectures) are provided weekly, which you can go to live or watch the recording. Study groups are recommended, and hopefully you are unafraid to leave/join study groups until you find one that works well for you. In your study group you should be able to ask questions and work through problems while your peers provide feedback (criticism, reinforcement, etc). When all is said and done, this class has three exams that are each 25% of your grade. The final exam is only an opportunity to replace one of those exam scores if you'd like to improve the grade you're at, an opportunity I've heard isn't available during the summer. I was sitting in the B range before the final exam and was quite happy to not need to take it.</p>
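<p>For a sense of what that practice grind looks like, here's a classic dynamic programming exercise of the kind you end up drilling (my own example, not an actual course problem): the longest increasing subsequence, filled in with an O(n&#xB2;) table.</p>

```python
def lis_length(nums):
    """Length of the longest increasing subsequence, O(n^2) DP.
    dp[i] = length of the longest increasing subsequence ending at index i."""
    if not nums:
        return 0
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                # Extending the subsequence ending at j with nums[i]
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([5, 2, 8, 6, 3, 6, 9, 7]))  # 4  (e.g. 2, 3, 6, 9)
```

The exam answer isn't the code, though - it's stating the subproblem definition, the recurrence, and the runtime, which is why untimed practice until you can produce those without notes pays off.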
<h3 id="heading-research">Research</h3>
<p>If you made it through my course reviews, you probably noted that I had a few opportunities to do research.</p>
<ul>
<li><p><a class="post-section-overview" href="#vertically-integrated-project-big-data-and-quantum-mechanics">VIP</a></p>
</li>
<li><p><a class="post-section-overview" href="#education-technology-foundations-cs6460">Education Technology</a></p>
</li>
<li><p><a class="post-section-overview" href="#special-problems-cs8903">Special Problems</a></p>
</li>
</ul>
<p>Despite being fully online in both coursework and community, Georgia Tech manages to offer multiple avenues for OMSCS students to do research - which is phenomenal. Whatever your aspirations are following OMSCS, there's an opportunity to dive deeper into a subject that interests you if you're ready to work independently.</p>
<h3 id="heading-cost">Cost</h3>
<ul>
<li><p>Fall 2020: $1,381</p>
</li>
<li><p>Spring 2021: $1,381</p>
</li>
<li><p>Summer 2021: $841</p>
</li>
<li><p>Fall 2021: $1,381</p>
</li>
<li><p>Spring 2022: $841</p>
</li>
<li><p>Summer 2022: $1,021 <em>(increased due to seminar enrollment)</em></p>
</li>
<li><p>Fall 2022: $647 <em>(decreased for all OMSCS students with removal of institutional fee)</em></p>
</li>
</ul>
<p><strong>Total: $7,493</strong></p>
<p>(One of my employer's benefits is a generous amount of tuition reimbursement, which I used to cover the cost of OMSCS.)</p>
<h2 id="heading-summary">Summary</h2>
<p>I'm proud of having completed OMSCS and I'm really grateful for all the learning opportunities I had as a part of the process. I don't have the same aspirations as I did during my first application to the program, but I do know that even while in the program the knowledge I was picking up was immediately applicable to my work; I'll credit some of my growth at work to OMSCS. It was a tough program that offered a lot of options for students to create the path that benefits them the most.</p>
<p>Sometimes people would ask me "what's it like working and doing school part time?" or "I've been thinking about doing a masters, how is it?" - to which my response is usually:</p>
<blockquote>
<p>It's really awful and painful, but I love it.</p>
</blockquote>
<p>Thanks OMSCS, I loved it.</p>
]]></content:encoded></item><item><title><![CDATA[Row Level Security for Embedded PowerBI Reports with Service Principal Authentication]]></title><description><![CDATA[To present a PowerBI report user or consumer with a securely pre-filtered dataset, row level security must be used. In a PowerBI embedded architecture where “app owns data”, implementing row level security (RLS) requires a modification to the token g...]]></description><link>https://blog.drewsk.tech/row-level-security-for-embedded-powerbi-reports-with-service-principal-authentication</link><guid isPermaLink="true">https://blog.drewsk.tech/row-level-security-for-embedded-powerbi-reports-with-service-principal-authentication</guid><category><![CDATA[PowerBI]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Wed, 05 Feb 2020 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745861974716/4d08b181-1143-4bc0-95d2-80eb85ec566d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To present a PowerBI report user or consumer with a securely pre-filtered dataset, row level security must be used. In a PowerBI embedded architecture where “app owns data”, implementing row level security (RLS) requires a modification to the token generation request. By specifying a role and user in the token request, we can generate an embed token specific to the user’s data access.</p>
<h2 id="heading-overview">Overview</h2>
<p>The application of RLS to PowerBI in an embedded scenario requires a 2-part change from the vanilla embed scenario. With the goal of embedding a report with row level security enforced, we must make changes to the embed token generation as well as to the report in PowerBI desktop.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/rls_serviceprincipal.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-obtaining-a-userrole-specific-embed-token">Obtaining a User/Role-Specific Embed Token</h2>
<p>We're going to start from the .NET Core example (<a target="_blank" href="https://drewsk.tech/powerbi-embedded-with-a-service-principal-account">previous post</a>) for generating an embed token. At line 61 we adjust the GenerateTokenRequest function to take in a new parameter, an <em>EffectiveIdentity.</em> Our GenerateTokenRequest goes from:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> generateTokenRequestParameters = <span class="hljs-keyword">new</span> GenerateTokenRequest(accessLevel: <span class="hljs-string">"view"</span>);
</code></pre>
<p>to:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> generateTokenRequestParameters = <span class="hljs-keyword">new</span> GenerateTokenRequest(
    <span class="hljs-string">"View"</span>, <span class="hljs-comment">// access level</span>
    <span class="hljs-literal">null</span>, 
    identities: <span class="hljs-keyword">new</span> List&lt;EffectiveIdentity&gt; { <span class="hljs-keyword">new</span> EffectiveIdentity(
        username: <span class="hljs-string">"PutTheUserNameHere"</span>, 
        roles: <span class="hljs-keyword">new</span> List&lt;<span class="hljs-keyword">string</span>&gt; { <span class="hljs-string">"RoleNameHereThisIsDefinedPerReport"</span>}, 
        datasets: <span class="hljs-keyword">new</span> List&lt;<span class="hljs-keyword">string</span>&gt; { report.DatasetId }
    )}
);
</code></pre>
<p>If your code (like the sample) does not already include list structures, you will need to add a reference to System.Collections.Generic with this adjustment. You will also likely need new input parameters for the user ID of the runtime user, and potentially the role(s) as well.</p>
<h2 id="heading-preparing-a-report-for-row-level-security-in-powerbi-desktop">Preparing a Report for Row Level Security in PowerBI Desktop</h2>
<p>Without diving too far into the structure of your data, we’re going to play with a sample model so simple that it is a single table. This model fulfills the most important constraint for row level security – a field that can be used in a DAX expression to validate access by user or by role – in this case by including a single column for the user’s identifier.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/sample-data.png" alt class="image--center mx-auto" /></p>
<p>Back in the PowerBI report designer view under the <em>Modeling</em> tab, we find an option for <em>Manage Roles</em>. We create a single role called “Users” that filters the single table in our dataset with the expression <code>[USERID] = USERNAME()</code>; that is, the column USERID must equal the runtime username.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/rows-security.png" alt class="image--center mx-auto" /></p>
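<p>To build intuition for what the <code>[USERID] = USERNAME()</code> role filter does, here is an illustrative Python sketch (not DAX, and not how the PowerBI engine is implemented) that applies the same rule to a small in-memory table; the column values and usernames are made up for the example.</p>

```python
# Simulate row-level security: keep only the rows whose USERID
# matches the identity passed in at report runtime.
rows = [
    {"USERID": "user-a", "AMOUNT": 10},
    {"USERID": "user-a", "AMOUNT": 25},
    {"USERID": "user-b", "AMOUNT": 40},
]

def apply_rls(table, username):
    # Equivalent of the DAX role filter [USERID] = USERNAME()
    return [r for r in table if r["USERID"] == username]

visible = apply_rls(rows, "user-a")
print(len(visible))  # only the two user-a rows survive the filter
```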
<p><strong>Note!</strong> When you view the report in PowerBI desktop and a role is not applied to your user, RLS is not enforced. The view-as functionality can be useful to apply the RLS of a specific role. In the example below, no rows match the username ‘DSK-LAPTOP\drewk’ when the role ‘Users’ is applied.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/all-rows.png" alt class="image--center mx-auto" /></p>
<p><strong>Tip!</strong> Want to know what username is being used at report runtime? Add a measure to your dataset for <code>RUNTIMEUSERID = username()</code> and add the measure to a card visual.</p>
<h2 id="heading-the-resulting-row-level-security">The Resulting Row Level Security</h2>
<p>A GUID-style username is passed to the report at runtime along with the role name ‘Users’, matching a subset of the rows. The resulting dataset and report embed token have RLS enforced, displaying only 57 of the total 114 rows.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/filtered-rows.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/embedded-row-level-security#applying-user-and-role-to-an-embed-token">https://docs.microsoft.com/en-us/power-bi/developer/embedded-row-level-security#applying-user-and-role-to-an-embed-token</a></p>
</li>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/service-admin-rls#using-the-username-or-userprincipalname-dax-function">https://docs.microsoft.com/en-us/power-bi/service-admin-rls#using-the-username-or-userprincipalname-dax-function</a></p>
</li>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.powerbi.api.v2.models.effectiveidentity?view=azure-dotnet">https://docs.microsoft.com/en-us/dotnet/api/microsoft.powerbi.api.v2.models.effectiveidentity?view=azure-dotnet</a></p>
</li>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.powerbi.api.v2.models.report?view=azure-dotnet">https://docs.microsoft.com/en-us/dotnet/api/microsoft.powerbi.api.v2.models.report?view=azure-dotnet</a></p>
</li>
<li><p><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/service-admin-rls">https://docs.microsoft.com/en-us/power-bi/service-admin-rls</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[PowerBI Embedded with a Service Principal Account]]></title><description><![CDATA[Throughout this month I’ve been working on our embedded PowerBI architecture – improving performance, streamlining administration, and reducing costs. We have a small PowerBI premium capacity that allows us to serve PowerBI reports and dashboards to ...]]></description><link>https://blog.drewsk.tech/powerbi-embedded-with-a-service-principal-account</link><guid isPermaLink="true">https://blog.drewsk.tech/powerbi-embedded-with-a-service-principal-account</guid><category><![CDATA[PowerBI]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Sun, 22 Dec 2019 20:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745862046563/4273f23d-6878-447a-b1a5-8024fdea78f4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Throughout this month I’ve been working on our embedded PowerBI architecture – improving performance, streamlining administration, and reducing costs. We have a small PowerBI premium capacity that allows us to serve PowerBI reports and dashboards to internal and external users in our apps without individually provisioning licensing. In PowerBI embedded documentation, this is commonly referred to as the “app owns data” architecture. <strong>In this post, I will cover the implementation of a service principal for authentication and accessing the PowerBI embedded API.</strong></p>
<p>Previously, PowerBI embedded in the “app owns data” architecture required the use of a master username/password combination when retrieving an embed token. Fortunately – a <a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/embed-service-principal">service principal</a> can replace a master account in v2 workspaces and provides a pathway to a superior authentication implementation for “app owns data” scenarios.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/service_principal.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-generate-an-embed-token-for-service-principal-authentication">Generate an Embed Token for Service Principal Authentication</h2>
<p>The PowerBI JavaScript API for embedding reports/dashboards requires an embed token and report embed URL to be passed for the specific report. We obtain this information within a <a target="_blank" href="https://github.com/dzsquared/powerbi-appownsdata-serviceprincipal/blob/master/EmbedInfo.cs">.Net Core Azure Function</a> – to do so, we need 3 pieces of information:</p>
<ul>
<li>PowerBI Workspace Id</li>
<li>Azure AD app registration – Application (client) Id and Client Secret</li>
<li>Report Id – can be sent at runtime</li>
</ul>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/PBI_groupID.png" alt class="image--center mx-auto" /><br /><em>Workspace Id Visible in the Browser URL</em></p>
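<p>Since the workspace Id is visible in the browser URL as shown above, it can also be extracted programmatically; a small illustrative Python sketch, using a made-up workspace URL and GUID:</p>

```python
import re

# A made-up example of a PowerBI workspace URL as seen in the browser
url = "https://app.powerbi.com/groups/11111111-2222-3333-4444-555555555555/list"

# The workspace (group) Id is the GUID segment that follows /groups/
match = re.search(r"/groups/([0-9a-fA-F-]{36})", url)
workspace_id = match.group(1) if match else None
print(workspace_id)
```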
<h2 id="heading-previous-architecture-and-changes">Previous Architecture and Changes</h2>
<h5 id="heading-powerbi-workspace-and-premium-capacity">PowerBI Workspace and Premium Capacity</h5>
<p>The established PowerBI workspace needed to be upgraded from v1 to v2, which was nearly trivial and completed in place. If you are starting from scratch – you can create the <a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/service-admin-premium-manage">PowerBI workspace</a> from the PowerBI web interface and a <a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/azure-pbie-create-capacity">PowerBI premium capacity</a> from the Azure Portal.</p>
<h5 id="heading-azure-active-directory-app-registration">Azure Active Directory App Registration</h5>
<p>We already have an Azure AD app registered with the necessary permissions for the PowerBI API and will continue to use that same registered application with the addition of the service principal. To do so, we now need to generate a Client Secret in the application.</p>
<p>The client secret, with the client/app id, is used to create the credential to authenticate our service principal with Azure AD.</p>
<p>The “Application (client) ID” is found on the app registration overview tab. The client secrets are managed under “certificates &amp; secrets.”</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/12/app_clientsecret.png" alt class="image--center mx-auto" /><br /><em>Azure AD App – Client Secret</em></p>
<pre><code class="lang-csharp"><span class="hljs-keyword">string</span> ADdomain = Environment.GetEnvironmentVariable(<span class="hljs-string">"ADdomain"</span>);
<span class="hljs-keyword">string</span> ClientId =  Environment.GetEnvironmentVariable(<span class="hljs-string">"AppClientId"</span>); 
<span class="hljs-keyword">string</span> ClientSecret = Environment.GetEnvironmentVariable(<span class="hljs-string">"AppClientSecret"</span>);
</code></pre>
<p>After grabbing those values from application settings, they are used with <a target="_blank" href="https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory/3.19.8">ADAL.NET</a> to authenticate against Azure AD. I wasn’t ready to jump to version 5 of ADAL.NET, so this example relies on 3.19.8. </p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> credential = <span class="hljs-keyword">new</span> ClientCredential(ClientId, ClientSecret);

<span class="hljs-comment">// Authenticate using created credentials</span>
<span class="hljs-keyword">string</span> AuthorityUrl = <span class="hljs-string">"https://login.microsoftonline.com/"</span>+ADdomain+<span class="hljs-string">"/oauth2/v2.0/authorize"</span>;
<span class="hljs-keyword">var</span> authenticationContext = <span class="hljs-keyword">new</span> AuthenticationContext(AuthorityUrl);
<span class="hljs-comment">// ResourceUrl is the PowerBI API resource, "https://analysis.windows.net/powerbi/api"</span>
<span class="hljs-keyword">var</span> authenticationResult = <span class="hljs-keyword">await</span> authenticationContext.AcquireTokenAsync(ResourceUrl, credential);
</code></pre>
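<p>Under the hood, <code>AcquireTokenAsync</code> performs an OAuth2 client-credentials grant against the tenant's token endpoint. A Python sketch of roughly the request ADAL builds, for illustration only; the tenant name and credentials are placeholders, and the resource value is the commonly used PowerBI API resource identifier:</p>

```python
from urllib.parse import urlencode

tenant = "contoso.onmicrosoft.com"  # placeholder for ADdomain
token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/token"

# Client-credentials grant body; the resource identifies the PowerBI API
payload = urlencode({
    "grant_type": "client_credentials",
    "client_id": "APP_CLIENT_ID",          # placeholder
    "client_secret": "APP_CLIENT_SECRET",  # placeholder
    "resource": "https://analysis.windows.net/powerbi/api",
})
print(token_url)
```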
<h2 id="heading-creating-a-service-principal">Creating a Service Principal</h2>
<p><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/embed-service-principal">Creating a service principal</a> and associating it with the Azure AD app can be accomplished through the Az Powershell module. It’s worth noting that when you start associating a service principal with an Azure AD app, previous permissions within Azure AD are terminated. This means that your prior integration may break if the permissions weren’t set within the PowerBI admin portal.</p>
<p>The Powershell script below takes your Azure AD app’s Application (Client) Id – as used above – then creates a service principal for that app (line 10), generates credentials for it, creates an Azure AD security group (line 16), and finally adds the service principal to the security group.</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">import-module</span> Az

<span class="hljs-comment"># Required to sign in as a tenant admin</span>
<span class="hljs-built_in">Connect-AzAccount</span>

<span class="hljs-comment"># Get the existing AAD application by its Application (client) Id</span>
<span class="hljs-variable">$app</span> = <span class="hljs-built_in">Get-AzADApplication</span> <span class="hljs-literal">-ApplicationId</span> <span class="hljs-number">12345678</span><span class="hljs-literal">-1234</span><span class="hljs-literal">-1234</span><span class="hljs-literal">-1234</span><span class="hljs-literal">-123456789012</span>

<span class="hljs-comment"># Creates a service principal</span>
<span class="hljs-variable">$sp</span> = <span class="hljs-built_in">New-AzADServicePrincipal</span> <span class="hljs-literal">-ApplicationId</span> <span class="hljs-variable">$app</span>.ApplicationId <span class="hljs-literal">-DisplayName</span> <span class="hljs-string">"PowerBI_ServicePrincipal"</span>

<span class="hljs-comment"># Get the service principal key.</span>
<span class="hljs-variable">$key</span> = <span class="hljs-built_in">New-AzADSpCredential</span> <span class="hljs-literal">-ObjectId</span> <span class="hljs-variable">$sp</span>.ObjectId

<span class="hljs-comment"># Create an AAD security group</span>
<span class="hljs-variable">$group</span> = <span class="hljs-built_in">New-AzADGroup</span> <span class="hljs-literal">-DisplayName</span> <span class="hljs-string">"PowerBI_Security"</span> <span class="hljs-literal">-MailNickName</span> notSet

<span class="hljs-comment"># Add the service principal to the group</span>
<span class="hljs-built_in">Add-AzADGroupMember</span> <span class="hljs-literal">-TargetGroupObjectId</span> <span class="hljs-variable">$</span>(<span class="hljs-variable">$group</span>.ObjectId) <span class="hljs-literal">-MemberObjectId</span> <span class="hljs-variable">$</span>(<span class="hljs-variable">$sp</span>.ObjectId)
</code></pre>
<p>Once the service principal and enclosing security group are created, you have two steps in the PowerBI admin portal to enable the service principal for the workspace:</p>
<p><img src="https://i0.wp.com/docs.microsoft.com/en-us/power-bi/developer/media/embed-service-principal/admin-portal.png?w=740&amp;ssl=1" alt /><br /><em>Enable Service Principals</em></p>
<ol>
<li>Enable Service Principals in <a target="_blank" href="https://app.powerbi.com/admin-portal/tenantSettings">Tenant settings</a> &gt; Developer settings (scroll down) &gt; Allow service principals to use Power BI APIs. For best security practices, limit this to the security group you created above.</li>
<li>Add the security group as an admin of the PowerBI workspace – in the PowerBI portal within the workspace, select <em>access</em> in the upper right. </li>
</ol>
<h2 id="heading-the-azure-function">The Azure Function</h2>
<p>In our architecture, an Azure Function takes a report Id as a query string for a report that has been published to the workspace. The function outputs the 3 key items necessary for embedding a report via the <a target="_blank" href="https://github.com/microsoft/PowerBI-JavaScript">PowerBI JavaScript library</a>: report Id, report embed URL, and an embed access token.</p>
<p>After the Azure AD authentication of the registered app, as above, the credentials are transformed for use by the PowerBI API. The <em>authInfo</em> object is a template for the return content.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> tokenCredentials = <span class="hljs-keyword">new</span> TokenCredentials(authenticationResult.AccessToken, <span class="hljs-string">"Bearer"</span>);

<span class="hljs-comment">// Create a Power BI Client object. It will be used to call Power BI APIs.</span>
<span class="hljs-keyword">using</span> (<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> PowerBIClient(<span class="hljs-keyword">new</span> Uri(ApiUrl), tokenCredentials))
{
    <span class="hljs-keyword">var</span> report = <span class="hljs-keyword">await</span> client.Reports.GetReportInGroupAsync(GroupId, reportInfo);

    <span class="hljs-comment">// Generate Embed Token.</span>
    <span class="hljs-keyword">var</span> generateTokenRequestParameters = <span class="hljs-keyword">new</span> GenerateTokenRequest(accessLevel: <span class="hljs-string">"view"</span>);
    <span class="hljs-keyword">var</span> tokenResponse = <span class="hljs-keyword">await</span> client.Reports.GenerateTokenInGroupAsync(GroupId, report.Id, generateTokenRequestParameters);

    <span class="hljs-keyword">if</span> (tokenResponse == <span class="hljs-literal">null</span>)
    {
        log.LogInformation(<span class="hljs-string">"Failed to generate embed token."</span>);
    }

    authInfo.accessToken = (<span class="hljs-keyword">string</span>)tokenResponse.Token;
    authInfo.embedUrl = (<span class="hljs-keyword">string</span>)report.EmbedUrl;
    authInfo.embedReportId = (<span class="hljs-keyword">string</span>)report.Id;
}
</code></pre>
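<p>The function's response then carries the three values the JavaScript embed call needs. A hedged Python sketch of the shape of that payload, parsed the way a caller might validate it; the field names follow the <em>authInfo</em> object above, and the token, URL, and Id values are placeholders:</p>

```python
import json

# Placeholder payload mirroring the authInfo object returned by the function
response_body = json.dumps({
    "accessToken": "EMBED_TOKEN_PLACEHOLDER",
    "embedUrl": "https://app.powerbi.com/reportEmbed",
    "embedReportId": "11111111-2222-3333-4444-555555555555",
})

info = json.loads(response_body)
# All three fields must be present before handing them to the embed call
missing = [k for k in ("accessToken", "embedUrl", "embedReportId") if k not in info]
print(missing)  # an empty list means the payload is complete
```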
<p>The resulting Azure Function can be called by our application to supply the embed token and embed URL for the report. For the full function and the PowerShell script, check out my example repository: <a target="_blank" href="https://github.com/dzsquared/powerbi-appownsdata-serviceprincipal">powerbi-appownsdata-serviceprincipal</a> </p>
<h2 id="heading-references">References</h2>
<ul>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/embed-service-principal">Embed service principal</a>  </li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/power-bi/developer/embed-sample-for-customers">Embed sample for customers</a>  </li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/powershell/module/az.resources/new-azadspcredential?view=azps-3.2.0">New-AzADSpCredential</a>  </li>
<li><a target="_blank" href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.powerbi.api.v2.reportsextensions.getreportingroup?view=azure-dotnet">GetReportInGroup</a>  </li>
<li><a target="_blank" href="https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/App%20Owns%20Data">PowerBI Developer Samples</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Azure Logic Apps and Form-Data HTTP Requests]]></title><description><![CDATA[The nuts and bolts of this post is about sending an HTTP POST request in an Azure Logic App that utilizes the multipart/form-data content type. I don't run into it often, but when I do, I'm sure glad I figured out how to do more than application/json...]]></description><link>https://blog.drewsk.tech/azure-logic-apps-and-form-data-http-requests</link><guid isPermaLink="true">https://blog.drewsk.tech/azure-logic-apps-and-form-data-http-requests</guid><category><![CDATA[Azure]]></category><category><![CDATA[Azure Logic Apps]]></category><dc:creator><![CDATA[Drew Skwiers-Koballa]]></dc:creator><pubDate>Wed, 06 Mar 2019 20:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The nuts and bolts of this post is about sending an HTTP POST request in an Azure Logic App that utilizes the multipart/form-data content type. I don't run into it often, but when I do, I'm sure glad I figured out how to do more than application/json request bodies in Logic Apps. The use case I came across this week for multipart/form-data body was for the <a target="_blank" href="https://documentation.mailgun.com/en/latest/api-sending.html#sending">Mailgun</a> API.</p>
<h2 id="heading-the-mailgun-example">The Mailgun example</h2>
<pre><code class="lang-bash">curl -s --user <span class="hljs-string">'api:YOUR_API_KEY'</span> \
    https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages \
    -F from=<span class="hljs-string">'Excited User &lt;mailgun@YOUR_DOMAIN_NAME&gt;'</span> \
    -F to=YOU@YOUR_DOMAIN_NAME \
    -F to=bar@example.com \
    -F subject=<span class="hljs-string">'Hello'</span> \
    -F text=<span class="hljs-string">'Testing some Mailgun awesomeness!'</span>
</code></pre>
<p>The above example is directly from the Mailgun documentation, but we need to translate it into an Azure Logic Apps HTTP request.</p>
<ul>
<li><p>This is a POST request</p>
</li>
<li><p>The URL is https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages</p>
</li>
<li><p>You need a header for "Content-Type" with a value that sets form-data and a boundary. This example value would work just fine "multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW"</p>
</li>
<li><p>We'll cover the body text area in a moment.</p>
</li>
<li><p>Authentication is basic, username is api and the password is your private api key from the mailgun dashboard.</p>
</li>
</ul>
<h3 id="heading-request-body">Request Body</h3>
<pre><code class="lang-bash">------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name=<span class="hljs-string">"from"</span>

Excited User &lt;mailgun@YOUR_DOMAIN_NAME&gt;
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name=<span class="hljs-string">"to"</span>

YOU@YOUR_DOMAIN_NAME
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name=<span class="hljs-string">"subject"</span>

Hello
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name=<span class="hljs-string">"text"</span>

Testing some Mailgun awesomeness!
------WebKitFormBoundary7MA4YWxkTrZu0gW--
</code></pre>
<p>The body content is composed of sets of four rows following the form field boundary value:</p>
<ul>
<li><p>label for the form field, such as Content-Disposition: form-data; name="from"</p>
</li>
<li><p>empty line</p>
</li>
<li><p>form field value</p>
</li>
<li><p>the form field boundary value as stated in the Content-Type header value</p>
</li>
</ul>
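<p>The four-row pattern is mechanical enough to generate. An illustrative Python sketch that assembles the same body from a dictionary of form fields; the boundary value is copied from the example above, and the display-name angle brackets on the from address are omitted for brevity:</p>

```python
boundary = "----WebKitFormBoundary7MA4YWxkTrZu0gW"
fields = {
    "from": "mailgun@YOUR_DOMAIN_NAME",
    "to": "YOU@YOUR_DOMAIN_NAME",
    "subject": "Hello",
    "text": "Testing some Mailgun awesomeness!",
}

lines = []
for name, value in fields.items():
    lines.append("--" + boundary)  # delimiter line for each field
    lines.append(f'Content-Disposition: form-data; name="{name}"')
    lines.append("")               # required blank line before the value
    lines.append(value)
lines.append("--" + boundary + "--")  # closing delimiter
body = "\r\n".join(lines)
print(body)
```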
<h2 id="heading-through-postman">Through Postman</h2>
<p>If you're working with HTTP requests I highly recommend a tool such as Postman to test, save, and modify your endpoints. In the instance of creating a form-data request for Azure Logic apps, the "Code" functionality in Postman can save you a bit of time.</p>
<p>You would start by building the POST request you would like to make in Postman, including the form values in the Body tab.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/03/formdata_postman.png" alt="Completed request in Postman" class="image--center mx-auto" /></p>
<p>After the information is entered into Postman, click the small "Code" action in the upper right, which opens a popup with the request built in a selection of languages. We want to switch <em>HTTP</em> to <em>Java OK HTTP</em>.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/03/postman_code.png" alt class="image--center mx-auto" /></p>
<p>Two sections of code are important to grab from here for use in Azure Logic apps. The first is the highlighted request body code. The second is the header value for content-type on line 8, which is <code>"multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW"</code> in this example.</p>
<p>Back in the Azure Logic apps UI, switch from <em>Designer</em> to <em>Code view</em>. You will paste the request body in for the <em>body</em> value here.</p>
<p><img src="https://drewsktech.blob.core.windows.net/images/wp-content/uploads/2019/03/codeview.png" alt class="image--center mx-auto" /></p>
<p>You will need to complete the rest of the request values, including the Content-Type header, for your request to be complete.</p>
<h2 id="heading-azure-logic-app-http-requests">Azure Logic App HTTP Requests</h2>
<p>I love leveraging Logic Apps to simplify the interaction for the calling application and reduce potential change points when a 3rd party makes API changes. Most frequently I'm hitting a POST endpoint that accepts a JSON body, but I still occasionally see form-data. By creating a boundary string in the header and setting the body up with the 4-line sets (label, blank, value, boundary), you can send these requests in Azure Logic Apps.</p>
]]></content:encoded></item></channel></rss>