Architecture Decisions

I’ve spent multiple years on a wide array of different teams. One thing that separates prepared teams from chaotic ones is, in a nutshell, documentation. Most of the teams I’ve worked with seem to know this. Yet, through a series of unfortunate events or decisions, documentation doesn’t get prioritized. But there’s one pretty easy way to bring documentation to your team without dedicating tons of team hours to it: choose tools and technologies that already have good documentation.

In this field, we can become distracted by shiny new things. Whether it’s trying to create our own security framework (in 99.9% of cases, please don’t) or using a cool new language, we often find ourselves stuck with a tool that is more trouble than it’s worth. Ramp-up time for new employees gets strung out, standardization turns into passive-aggressive changes to their ‘wrong code’ with every commit, and future employees curse your name as they try to fix the Pandora’s box you handed them. All because of poor documentation.

If given the choice between two fairly equal options, choose the one with better documentation. You can send your new folks to it to get up to speed quickly. Standards will already be set, so disagreements can be squashed more easily. Future employees have something to lean on when navigating legacy code. Everyone is much happier for it, even if you couldn’t implement the shiny thing.

To note, this isn’t a blanket rule. On rare occasions, people are in the right place to write documentation for their project, or to create that awesome security framework that fills a gap. But in most cases (especially if you’re working on enterprise apps), side with already-documented tools. I thank you for it in advance.

This is but a small snippet, based on my own experience and conversations with coworkers and leaders in the field in my area. Tell me in the comments if you’ve got a different perspective.


Candor in the Workplace

Growing up, I received a very particular impression from a couple of family members, but mostly from friends, other students and neighbors. That impression was this: if you don’t stand firm and say what’s on your mind, you’re a coward. Having a problem with someone, not voicing it, and likely talking about it behind their back would cement a person as weak. Working blue-collar jobs for some years solidified that as well.

There are some valuable and admirable things in that. I find it courageous to address problems head-on rather than back down. The execution of that, though, was often shaky at best. Confronting a person in public was common, leaving the issue aired out for everyone to hear. This often ended in ridicule for the parties involved. Not graceful by any measure, and to a certain degree it exacerbated problems rather than solving them.

In many cases, though, I’ve found being frank and honest to be invaluable. In the white-collar world, it’s common practice to be secretive and to “lay low” to avoid undesirable attention. This can cause problems in companies: there’s often a lack of trust among coworkers, and real communication and problem-solving can be hindered. On the flip side, being consistently sincere gives the listener more confidence in what you say and do, and gives an opportunity to air grievances and address them legitimately, making for a happier and healthier team.

Also, on the positive side, being open about good things is a great morale boost. People love to feel good about what they do, so expressing gratitude and appreciation can really make someone’s day and push them on to further growth. I’ve seen many people be uncomfortable with this, as it’s fairly uncommon or seems cheesy. But it doesn’t have to be a grand celebration with confetti and balloons: simply acknowledging someone for consistently good work, or for their part in a large project, or even for just being an easy person to talk to is pretty simple and can really put a smile on their face.

Having been out of blue-collar work and in IT for six years now, one would think the appropriate way to communicate would have sunk in. That isn’t the case, though. This week, I found myself in a situation where I was surprised and jumped to a conclusion. This coincided with a rough night of sleep and some medication side effects, which made being overly emotional much easier. I could have been up-front in a private way, but I allowed emotions to cloud my judgment and reverted to the aforementioned form of expression. I aired my concerns publicly, and with a good layer of dramatics.

What I didn’t realize was that the situation I’d brought up was completely intentional, and was in fact done for the good of other teammates. What I’d hoped would occur was already in the works, and if I’d taken the time to speak to someone privately, I would’ve figured this out. Now, don’t get me wrong: I don’t regret that I was forthright and honest, or that I felt passionate about what I was talking about. The execution of the communication was the problem, and it made things worse rather than better.

The takeaway I got from this experience is that it’s far more effective to take time to create a plan for being honest. I tend to be emotional, following my heart rather than my head. I need to be more intentional about how and where I express myself, and leave old habits in the past where they belong.

Done With Social Media

Over the last few months, I’ve grown tired of social media. From data leaks to a lack of guts to stand up to harassment and bullying, from feeling like I’m often talking to empty air to the inability to have a reasonable debate, social media has lost all value to me. I figure if anyone honestly wants to talk to me, they have my number, know where I live, can meet me out and about, and probably have my email. In the meantime, I’m going to go do something actually productive with my time. Maybe get more projects done. Consider writing a book. I’ll figure it out.


Michael Bowman

Taking Back My Life

“It’s been a long time comin’, but I know a change gon’ come. Oh yes it will.”

These words come from a favorite song of mine, aptly called A Change Is Gonna Come by Sam Cooke. It’s a song filled with inspiration and hope, a desire to push forward and hold onto your dream. It holds a very significant message in relation to the Civil Rights movement, and also holds a personal meaning for me.

For the last 3 years, my dream has been to holistically feel good. Through many discussions, percussions, appointments, disappointments, tests, prescriptions, preclusions, struggles and victories, I am glad to say that change has finally come.

A little back-story: I’ve always been a particularly emotional guy. Mostly I’ve attributed that to a no-puppies-and-rainbows childhood and subsequent early adulthood. However, as things got better, and more breakthroughs were had regarding that speckled past, I still wasn’t feeling better. Once I was able to sit alone in my own head for a while and strip away all the external excuses, the internal brokenness revealed itself.

I started working at Heuristic Solutions in July of 2015. It was a great opportunity to work in test engineering, which I believe truly came about thanks to the likes of Matt Groves, Seth Petry-Johnson and Calvin Allen, all friends who worked there at the time.

Why did I just state all that? Well, it was a work-from-home position.

That’s great, right?

Yes, of course it is. Let me explain.

When working from home, you get the benefit of not having to travel much, which saves quite a bit on maintenance, gas and time. You also get to dress and organize your office as you see fit, which is pretty awesome as well. The downside, though, is that if you’re not careful, you might forget to see the light of day for quite a while. That lack of vitamin-D-forming sunshine can leave you feeling sluggish and low. I did this, and I felt low. REALLY low. To be candid, I was having significant relationship issues, was overworking, was feeling extremely inadequate in pretty much all areas, and it all culminated in not wanting to live anymore.

I’ll take a moment to thank my wife here. Jen, if you’re reading this right now, thank you for saving me. You didn’t have to, but you did, and you’re the most beautiful person to me for that.

My wife struggled and fought with me for a long time until she convinced me to go to the doctor. It didn’t take long to diagnose me with Manic Depressive Disorder and start the long and arduous road to figuring out what medications work for me. There are a lot out there, since depression is an odd beast with no silver bullet.

We also took some time to go to a marriage counselor. To note: if you’re going through any sort of health issue, your loved ones are, too. They’re trying to help, trying to figure out what they can and can’t do, wanting badly to fix you but knowing they can’t, and all the while trying to understand without getting frustrated. Counseling helped to bring these topics out and brought about more understanding and suggestions on what can and can’t be done.

With these things in mind, we had a battle plan. Slowly, but surely, I started to feel more level-headed, more energetic, more at peace with myself. Improvements were made, and it was great. But I noticed I wasn’t completely healthy still.

I felt tired all the time, like I wasn’t getting very good sleep. From mid-2017 to early 2018, I started to feel like I was getting no sleep at all. I’d yawn all day long, I couldn’t focus on anything, my mind was a cloud and I was floating through life. My work was slow and sloppy. I stopped knowing what day of the week it was, or sometimes even what month. I started having trouble remembering fairly common English words, and would often look to others and describe what I was trying to say so they could fill in the blank.

Back to the doctor. He’s not a sleep specialist, so I was referred to a sleep lab. After plenty of complications there (which I won’t go into for legal reasons), I was diagnosed with sleep apnea. EXTREME sleep apnea.

I’m gonna explain this a little. If you’re not interested in it, skip ahead a paragraph.

A person with sleep apnea has a series of things going on at night that cause them to get poor sleep due to breathing issues. Everyone is familiar with snoring, but there are also moments where the person stops breathing entirely, and then the body forces a large gasp of breath to stay alive, which wakes the person for just a moment. When a sleep study is done, these events are recorded. If a person has at least 5 of these events per hour, they are considered to have sleep apnea. At 20 or 30 per hour, it’s considered severe sleep apnea. Once I had my sleep study, it was found that I had about 60 per hour. That means that, on average, I stopped breathing and gasped for breath once every minute while sleeping. Essentially, I wasn’t sleeping at all.

Fortunately, there is hardware to help solve this. I received a CPAP machine, which blows air into my nose, keeping my windpipe from collapsing and causing those stops in breathing. I’ve been using it for about two weeks now, and I haven’t felt this good in years.

Before all of this happened, I didn’t like going to the doctor, didn’t trust them, didn’t want to take medication, and just wanted to make my body work like it should. But at some point, I put away my pride and did what I needed to do to take back my life. I’m still taking medication and getting regular checkups, and I’m fine with that. It’s all in the name of turning me back into stubborn-as-a-bulldog Mike.

I’m starting to get back into things that I’ve been missing out on for so long, and probably things that many of you attend that you might have noticed I haven’t been going to for a while. I’m gonna start speaking again, and am in the process of getting a streaming channel going. I’m getting back into fixing things and woodworking. Getting back into being me.

Damn, it feels good!

Seriously though, if you’re not feeling good and you’re hesitant to talk to someone about it, don’t wait. Don’t put it off another second. Call your doctor. Schedule an appointment. Go to the appointment. Be honest with the doctor. Follow through with the prescriptions. Be diligent about finding the solution. Your loved ones want you to be you. Life is far too short to feel like crap. If you wanna talk through anything, my DM box is open at



An Update On My ASP.NET Core Project

My previous post reviewed using ASP.NET Core to create a Web API endpoint. In that post, I stated that nothing at the time prevented me from continuing with ASP.NET Core and .NET Core. However, as time went on, it was decided to use OData. This is where my roadblock appeared: as of the writing of this post, OData support does not exist for ASP.NET Core. I’ve got a separate project where it didn’t make sense to use OData, so we’re going with ASP.NET Core for that one. I’m pretty excited about that, considering all the benefits I talked about in my previous post.

For those unfamiliar, OData (the Open Data Protocol) is a standard that defines how to build and consume RESTful APIs. In the .NET world, this standard is available via the Microsoft.Data.OData NuGet package, and can be used in an ASP.NET project via the Microsoft.AspNet.OData NuGet package (both described in more detail here). With OData, a client can specify what data they would like returned via a query string or other parameter. This in turn is translated into a specific query that returns only the specified dataset, which is very useful for minimizing data traffic and ensuring the client gets exactly what they need.
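As a rough sketch of what this looks like in ASP.NET Web API 2 (the Product model and data source here are hypothetical, for illustration only), a controller derives from ODataController and exposes an IQueryable that OData query options get applied to:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.OData; // from the Microsoft.AspNet.OData NuGet package

// Hypothetical model for illustration only.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ODataController
{
    // In-memory stand-in for a real data source.
    private static readonly List<Product> Store = new List<Product>
    {
        new Product { Id = 1, Name = "Widget", Price = 9.99m },
        new Product { Id = 2, Name = "Gadget", Price = 24.99m },
    };

    // [EnableQuery] tells Web API to apply $select, $filter, $orderby,
    // $top, etc. to the returned IQueryable before serializing.
    [EnableQuery]
    public IQueryable<Product> Get()
    {
        return Store.AsQueryable();
    }
}

// The client then trims the payload itself, e.g.:
//   GET /odata/Products?$select=Name,Price
//   GET /odata/Products?$filter=Price lt 20&$top=10
```

The key design point is that the server exposes one queryable endpoint and the client composes the query, rather than the server anticipating every data shape up front.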

For the project in question, we had an endpoint returning JSON data with multiple fields and child objects. The total payload, including all objects and fields, was over 600MB. For a mobile experience, this would be unacceptable. There would also be multiple clients requesting different pieces of this data at any given time. The clients could receive the full payload and simply ignore the fields and objects they don’t need, but that’s a waste of traffic. I could also create different endpoints for each need, but that would become difficult to manage over time as more needs arise. OData gave us the opportunity to expose a single generic endpoint and let each client decide what data it receives.

Looking into OData support for ASP.NET Core, I found nothing. Digging deeper into the documentation, I found the RESTier project. At face value, it seems to take a more ASP.NET Core-style approach to creating an OData endpoint. However, it is also not supported on ASP.NET Core at this time. Perhaps that will change in the future.

In the meantime, I’ve pulled that project over to a full .NET 4.6.2 and AspNet.Mvc 5.2.3 project. The process was not too difficult: I created a new ASP.NET MVC project, added the existing files to it, switched IActionResults to IHttpActionResults, changed NoContent() results to StatusCode(HttpStatusCode.NoContent) results, and swapped out the Core NuGet packages for full .NET packages and using statements. Overall, it wasn’t too painful, even if I’d rather avoid doing it again.
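A minimal before/after sketch of that translation (the ItemsController, its Delete action, and the RemoveItem helper are hypothetical):

```csharp
// Before: ASP.NET Core (Microsoft.AspNetCore.Mvc)
// public class ItemsController : Controller
// {
//     public IActionResult Delete(int id)
//     {
//         RemoveItem(id);
//         return NoContent(); // 204 helper exists on ControllerBase
//     }
// }

// After: ASP.NET Web API 2 (System.Web.Http)
using System.Net;
using System.Web.Http;

public class ItemsController : ApiController
{
    public IHttpActionResult Delete(int id)
    {
        RemoveItem(id); // hypothetical data-layer call
        // Web API 2 has no NoContent() helper, so the status
        // code is returned explicitly instead.
        return StatusCode(HttpStatusCode.NoContent);
    }

    private void RemoveItem(int id) { /* data access elided */ }
}
```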

Thanks for reading!

Getting Started With ASP.NET Core

Over the course of the last month, I’ve had the opportunity to learn, experiment with, and implement a Web API project utilizing ASP.NET Core. While this has been a very edifying experience, I found that the details of the newer technology are a bit spread out across multiple sources. My aim in this post is to compile those details, describe how I’ve started out, and explain what’s available and possible.

The first thing I’d like to review is probably my favorite part of .NET Core as a whole: portability. Do you want to use the rich features of Visual Studio to develop your application? Great, that still works as before. Would you rather pull up Visual Studio Code or another, simpler editor to make changes, big or small, and then build the code easily? The dotnet CLI gives you that capability, along with the ability to create new projects, run tests, publish, and more. Do you want to develop .NET software but don’t have an installation of Windows? No worries – .NET Core can be installed on Linux and Mac.

Next, we tackle one of the concerns I’ve had: dependencies. Many well-used NuGet packages simply aren’t compatible with .NET Core. One would think this is a deal-breaker, but that’s not quite the case. Many third parties, such as the xUnit team, have created .NET Core-specific NuGet packages for this situation. As of the writing of this blog post, I’m successfully using xUnit 2.2.0-beta5-build3474 and xUnit.Runner.VisualStudio 2.2.0-beta5-build1225 (the runner is used to run the tests in Visual Studio’s Test Runner). These versions can be found in the NuGet Package Manager by checking the ‘Include prerelease’ box next to the search box. As further proof of this point, I’m also using NLog.Web.AspNetCore version 4.3.1 for logging, and Swashbuckle.AspNetCore version 1.0.0-rc3 to bring in the visual documentation tool Swagger UI.

For anyone who has created an ASP.NET/Web API project in the past, you may find that some namespaces have either moved or are not implemented at all. An example is pulling settings from a configuration file. With the full version of .NET, one would likely use ConfigurationManager. However, as Jonathan Danylko succinctly states here, it is not available in .NET Core. Not all hope is lost for configuration-file-based settings: dependency injection can be used to pass the settings around (see his post on the matter, or the documentation, for more information). Thus far, I’ve found nothing of this nature that prevented me from continuing with the project. The pieces all seem to be there; they may just be implemented in a different way.
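As a rough sketch of that dependency-injection approach (the MySettings class, its properties, and the section name are all hypothetical), ASP.NET Core binds a section of appsettings.json to a plain class and injects it via IOptions&lt;T&gt;:

```csharp
// appsettings.json (hypothetical):
// { "MySettings": { "ApiBaseUrl": "https://example.com", "TimeoutSeconds": 30 } }

using Microsoft.Extensions.Options;

// Plain settings class bound from configuration.
public class MySettings
{
    public string ApiBaseUrl { get; set; }
    public int TimeoutSeconds { get; set; }
}

// In Startup.ConfigureServices, register the binding:
//   services.Configure<MySettings>(Configuration.GetSection("MySettings"));

// Any controller or service then asks for the bound settings:
public class WidgetService
{
    private readonly MySettings _settings;

    public WidgetService(IOptions<MySettings> options)
    {
        _settings = options.Value; // populated from appsettings.json
    }
}
```

Compared with a static ConfigurationManager lookup, this keeps settings strongly typed and makes consumers easy to test by handing them a pre-built MySettings instance.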

From logging to dependency injection, unit testing to data access, and many things in-between, ASP.NET Core has proven to me that it is a stable, reliable option to make cross-platform APIs. If you have a project coming up that could use flexibility, extensibility and mobility, and/or you would like to work with some really interesting technology, I recommend this stack to you.

Windows 10 and VMWare Player In A Dishonest Relationship

Hello everyone,

In wanting to set up Windows 10 and check out the new Microsoft Edge browser, I decided to use VMWare Player. I’m more experienced with Hyper-V, so I figured it would only be a benefit to expand my horizons a bit and go non-MS on the virtualization side.

Initial installation was pretty easy: I downloaded the Windows 10 x64 ISO, downloaded VMWare Player, and installed the player. Adding a new virtual machine was also straightforward: choose the option to use an installer disc image file (ISO), put in the license key provided on the ISO download page, then name the VM and give it some hard drive space.

I ran into an issue where it threw an error saying Windows couldn’t read the unattend file for the license key. I figured out this was due to VMWare loading a floppy drive by default (we’re in 2015, right? ;)). Going to Player -> Removable Devices -> Floppy -> Disconnect resolves that little issue. Rebooting and trying again let me continue with the initial Windows setup, which was a Next -> Next -> Finish type of affair (YMMV).

Once Windows booted, I noticed that the screen resolution was extremely small. The viewing area was a little smaller than two decks of playing cards side by side on my Surface Pro 3. Going into the resolution settings in Windows 10 didn’t help, as it was already maxed out at 1024×768. After some digging, I found that VMWare Player passes the available resolutions through its VMWare Tools plugin, and it seems the Windows 10 version of this plugin is very basic.

To fix it, I first shut down the VM. Then I went to the virtual machine’s settings and moved over to the Options tab. There, I changed the guest operating system setting from Windows 10 to Windows 8. That way, the already-established Windows 8 tools get installed, and they should be compatible with the new OS. Needless to say…

I think it worked

Even though I’m promoting a dishonest relationship here (where Windows 10 pretends to be Windows 8), it seems to be a solid resolution with no obvious ill effects so far. These two can work out their partnership over dinner. 🙂

Thanks for reading!