It's easy to think that "code is code" and that the rules are always the same when you write it. It should be reusable; it should use clean, clear functions; it should run as fast as possible; it should completely handle every error case.
The thing is, none of these are true in every context. There are a lot of different types of code, and a lot of different code environments. They each come with a different set of rules and constraints. Some code needs to be fast, some code needs to have error handling, some code needs to be readable. But some code doesn't.
Don't try to optimize all the code that you write. Most code is already quite fast on a human timescale, and an inefficient algorithm is probably good enough for most use cases. If the 80-20 rule holds true, then 80% (or more) of your execution time will take place in 20% (or less) of your code. If we assume that highly optimized code takes 100% longer to write, then optimizing all of your code wastes about 40% of your time.
Fail as early as possible when you encounter errors. While predictable error scenarios like bad user input or unavailable third party services should be handled gracefully, code bugs should trigger a hard crash of the system so the developer knows to come and fix it. Examples that should trigger a crash are invalid database credentials, bad data at any point where your program can reasonably expect clean data, or permission denied errors when trying to write to log files.
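As a sketch of that fail-fast attitude (the setting names and the idea of reading credentials from environment variables are illustrative assumptions, not a prescription):

```python
import os

def load_config():
    """Read required settings at startup; crash immediately if any are missing."""
    config = {}
    for key in ("DB_HOST", "DB_USER", "DB_PASSWORD"):
        value = os.environ.get(key)
        if value is None:
            # A missing credential is a deployment bug, not a user error:
            # fail loudly at startup instead of limping along without it.
            raise RuntimeError(f"missing required environment variable: {key}")
        config[key] = value
    return config
```

Crashing at startup means the first person to notice the problem is the developer deploying the code, not a user hitting a broken feature hours later.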
Hot path code
Hot path code is the 20% of your code that takes up 80% of your processing time. This is the code that is worth optimizing. If you know any weird tricks to, for example, approximate an inverse square root (like bit-shifting right by one and subtracting from a magic number), this is where you should use them. Add lots of comments so everyone knows what's going on, but use every trick to make this code faster.
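For the curious, the inverse square root trick mentioned above looks roughly like this (shown in Python purely for illustration; the `struct` round-trips cost more than just calling `math.sqrt`, so the trick only pays off in languages like C):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the bit-shift-and-magic-number trick."""
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # reinterpret float bits as uint32
    i = 0x5F3759DF - (i >> 1)                         # the magic first guess
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # reinterpret bits back to float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson refinement step
```

Note how much commentary a four-line function needs: that's the price of hot path cleverness, and it's worth paying only where the profiler says so.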
Note: you should never simply assume that any particular code is in the hot path. You need to profile your running program to measure which code is actually taking up most of the execution time. And even if some code is in the hot path, if it's not causing a bottleneck, you still don't need to waste your time optimizing it.
Most of the normal rules go out the window with prototype code (also called "tracer bullets" by The Pragmatic Programmer). In most normal code you should avoid copy pasting, you should make neat and reusable functions, and you should separate your program into logical pieces. But with prototype code there is only one simple goal: evaluate your idea as fast as possible.
To write proper prototype code, you should do the minimum possible work to prove your point. You don't need to carefully consider your variable names. You shouldn't refactor the code to make it cleaner. You shouldn't put any effort at all into making it run fast (unless that's the point of your prototype). Most prototypes will result in a few hundred lines of horrible spaghetti code. And that's all totally fine because this code is temporary.
Once you have validated your idea, throw the prototype away. Prototypes are used to explore ideas. If you had a full understanding of the problem, you wouldn't need the prototype. And since you didn't have a full understanding of the problem, your prototype code is not going to be a good solution. It exists only to help you find the good solution. It should be completely rewritten once you have attained proper understanding.
Note: bosses typically like to suggest "cleaning up" the prototype to make it into a production ready system. Push back on this. If you want your code to be maintainable, then you need to rewrite, or at very minimum heavily refactor your prototype with a proper understanding of the problem.
UI code is any code that interacts with a user (and in some cases an external system beyond your control). This can be a browser, a terminal, a desktop application, or even an email processing hook.
You should assume that every possible incorrect thing will happen eventually. When you assume your UI is idiot proof, the universe will provide you with an improved idiot. Since human beings are not predictable, you should expect them to enter nonsense everywhere, and your UI code should gracefully handle this. If your web form clears all fields when a user enters an invalid phone number, you have failed to make a good UI.
Bad data is not only possible but expected in the UI layer, so you should sanitize data early, avoid losing state if possible, and show clear error messages so the user can fix the data.
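A minimal sketch of that attitude for the phone number example above (the seven-digit minimum and the returned dict shape are arbitrary assumptions for illustration):

```python
import re

def validate_phone(raw: str) -> dict:
    """Sanitize early, keep the user's input, and return a clear field-level error."""
    digits = re.sub(r"\D", "", raw)  # strip spaces, dashes, parentheses, etc.
    if len(digits) < 7:
        # Hand the original input back alongside the error so the form can
        # redisplay it instead of clearing every field the user filled in.
        return {"ok": False, "value": raw,
                "error": "That phone number looks too short - please check it."}
    return {"ok": True, "value": digits, "error": None}
```

The key design choice is that invalid input produces a helpful result, never an exception and never lost state.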
Boundary code sits between your messy UI or third-party systems and your clean internal system. The UI layer attempts to correct and instruct when it encounters bad data. The boundary layer's primary purpose is to serve as a data wall between the outside world and your system internals. Any data that gets past your boundary layer should be considered correct from then on in your program. Any data that is not correct should be rejected at the boundary layer.
For example, let's say that every user sign-up in your system requires a password. The UI should have a "required" password field, but you have no way of guaranteeing that a sign-up request actually passed through your UI code. A hacker could have constructed an HTTP request manually. In this case your boundary code should check that the new user has a password, and should reject the request when one is missing. This way your system internals can safely assume that a password will always exist, and you avoid a lot of error handling code and potential bugs.
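The sign-up example might be sketched like this (the payload shape and the email rule are made-up assumptions, not a real validation spec):

```python
def parse_signup_request(payload: dict) -> dict:
    """Boundary check: reject anything malformed so internal code can trust the data."""
    password = payload.get("password")
    if not isinstance(password, str) or not password:
        # The UI marks this field required, but a hand-crafted HTTP
        # request can skip the UI entirely - so reject it here.
        raise ValueError("password is required")
    email = payload.get("email")
    if not isinstance(email, str) or "@" not in email:
        raise ValueError("a valid email is required")
    # Past this point, the rest of the system may assume both fields exist.
    return {"email": email, "password": password}
```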
Once data has passed through your boundary layer, stop trying to validate it. You should still implement error handling, but this is not the place to check data types and data formats. If the data in your program is incorrect by the time it reaches your internal system, then you have a bug in your boundary layer. Don't try to validate the data; just allow your program to crash so you can come back and fix it.
There's no point trying to check for every possible invalid data scenario at every layer of your code. You need to trust your boundary layer to correctly sanitize or reject bad data.
In UI code, errors should be handled gracefully. In internal code, you should trigger a crash as soon as something violates your expectations. If you make your code log a warning and continue, then you risk processing incorrect data, and you will most likely ignore the problem until it's too late. But if your program goes down, then you will treat the bad data like the bug it is and fix it as soon as possible.
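In internal code, that can be as simple as asserting your expectations instead of defensively handling their violation (`apply_discount` is a hypothetical example):

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Internal code: these values already passed the boundary layer."""
    # A violation here is a bug in the boundary layer, so crash loudly
    # instead of logging a warning and processing bad data.
    assert 0 <= percent <= 100, f"invalid discount percent: {percent}"
    assert price_cents >= 0, f"invalid price: {price_cents}"
    return price_cents * (100 - percent) // 100
```

The assertions document the contract and turn silent data corruption into an immediate, debuggable crash.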
Tests are a super valuable debugging tool and a great development tool too. You can use tests to catch errors before they enter a production system, but you can also use tests to help you develop a tricky module. If you write your tests before you write your actual code, you can use the tests to quickly check whether the code you are writing is correct.
Personally, I think that testing code should be as straightforward as possible. It should be easy for you to visually confirm that the test logic is correct. Copy-pasting can be better than trying to create reusable modules with hidden functionality. If a module makes your test clearer, then go for it. But if copy-pasting 20 lines is easier to understand, do it. Whatever makes your testing code clearer.
Unit tests are used to validate that each individual module behaves the way that you expect. Your unit testing code should run quickly. You should be able to comfortably run them before every commit. If they take too long, they will either break the programmer out of "the zone", or they will be skipped entirely. To ensure speed, external APIs and databases are typically mocked so that their responses (success and failure cases) are quick and predictable.
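A small sketch of mocking an external service, using Python's standard `unittest.mock` (the rate-fetching function and its client interface are invented for the example):

```python
import unittest
from unittest import mock

def get_exchange_rate(client, currency: str) -> float:
    """Module under test: fetches a rate through some injected HTTP client."""
    response = client.get(f"/rates/{currency}")
    if response["status"] != 200:
        raise RuntimeError("rate service unavailable")
    return response["body"]["rate"]

class TestExchangeRate(unittest.TestCase):
    def test_success(self):
        # Mock the network call so the test is fast and deterministic.
        client = mock.Mock()
        client.get.return_value = {"status": 200, "body": {"rate": 1.25}}
        self.assertEqual(get_exchange_rate(client, "EUR"), 1.25)

    def test_failure_is_surfaced(self):
        # The failure case is just as predictable as the success case.
        client = mock.Mock()
        client.get.return_value = {"status": 503, "body": None}
        with self.assertRaises(RuntimeError):
            get_exchange_rate(client, "EUR")
```

Both cases run in microseconds and never touch the network, so there's no excuse to skip them before a commit.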
Integration tests broadly test how the system behaves as a whole. These tests don't need to be run on every commit. Usually they will be run before or just after a new release. An example of an integration test is running a headless browser to log into your web app and complete a purchase just like a real user would. Integration tests don't need to be as speedy as unit tests, but they should still run in less than 20 minutes or so, to ensure that you don't needlessly bottleneck your release process.
Unless you are writing code that needs to be 100% bulletproof, you don't need to have 100% test coverage. Validating modules where you aren't completely confident is a fine start (and a significant margin better than no tests at all). You can add tests when you change a particular piece of functionality to ensure that the new code behaves how you would expect, and you can add regression tests whenever you find a bug in your system, to ensure that the same type of bug does not happen again.
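A regression test can be as small as pinning a bug you just fixed so it can't silently return (`normalize_name` and the whitespace bug are made up for illustration):

```python
def normalize_name(name: str) -> str:
    """Trim extra whitespace and title-case a user's name."""
    return " ".join(part.capitalize() for part in name.split())

# Regression test: we once shipped a version where all-whitespace input
# crashed downstream code. Pin the fix so the same bug can't come back.
def test_whitespace_only_name_regression():
    assert normalize_name("   ") == ""

def test_normal_name_still_works():
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"
```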
As a side note: make sure that you are comfortable with how to write tests. You don't need to write tests for everything, but you should know how to write them for when you need them.
Library code is basically any reusable module that can be run from multiple contexts. It is code that performs some task on behalf of other code.
Library code should be as side-effect free as possible. It should perform its agreed task, and nothing else. If it encounters a problem while doing its job, it should avoid assuming how the calling code would want to handle the error. Instead it should return an error (if the error is an expected possible outcome) or throw an exception (if some constraint has been violated). It should always pass control back to the calling code rather than make any assumptions.
For example: if you have a library that runs tests, test failures will be displayed differently in different contexts. If you run your tests from the command line, you will likely want to see the errors written to stderr. But if the tests are triggered as part of a web UI, the errors should be passed to the browser to be displayed to the user. If the library called console.log with every test failure, then the web UI scenario wouldn't work correctly. By passing errors back to the caller, library code behaves as a well-behaved, composable unit in a larger system.
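That test-runner example might look like this as a sketch (the function names are invented; the point is that the library returns failures and a separate caller decides how to display them):

```python
def run_tests(tests):
    """Run each zero-argument test callable and *return* the failures.

    The library never prints anything; the caller decides whether
    failures go to stderr, a web page, or anywhere else.
    """
    failures = []
    for test in tests:
        try:
            test()
        except Exception as exc:
            failures.append((getattr(test, "__name__", "<test>"), str(exc)))
    return failures

def report_to_terminal(failures):
    """One possible caller: format failures for a command line."""
    return "\n".join(f"FAIL {name}: {msg}" for name, msg in failures)
```

A web UI would call `run_tests` with the same arguments and render the returned list as HTML instead; the library doesn't change at all.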
Business logic code
The business logic of a program can be the most frequently modified code. It needs to be easy to understand, verify, and change. This type of code can quickly turn into spaghetti if you're not careful, so it is especially important to spend the extra time and keep it neat.
Business logic usually doesn't need to be blazing fast. You should validate this assumption with performance profiling if you notice that your program is slow. A general rule of thumb is to avoid spending any time optimizing until you have data that says you need to. And only optimize the code that actually matters.
Analytic code is tricky to do correctly. Usually executive types will use the results of this code to make high level business decisions, so it's very important that this code produces correct data.
When you are writing analytic (or reporting) code, you will occasionally see small inconsistencies in your reported results. Perhaps one view of your data gives you a slightly different value than another. Since digging into the cause of this difference can represent a lot of extra work, it's tempting to write it off as "close enough" and "probably nothing". But small inconsistencies can be a symptom of an algorithm problem. Perhaps one view of the data groups your raw data by a certain field, and the other one does not. It's important to fix these problems early, because they get baked into the structure of the data and code, and the longer you wait, the more likely someone will make a decision based on bad data.
It can be hard to write good tests for analytic code since the test data you create will represent the data you expect to see in your program. Test data won't have the same weird edge cases that your real data will have. If possible, use a snapshot of real data for your test code. You should manually validate that your analytic code (and its test code) processes the raw data snapshot as you expect.
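A tiny sketch of the snapshot idea (the revenue report and the three-row snapshot are invented; a real snapshot would be a larger, anonymized export of production data):

```python
# Frozen sample of real (anonymized) data, checked in alongside the tests.
SNAPSHOT = [
    {"region": "north", "amount": 120},
    {"region": "north", "amount": 80},
    {"region": "south", "amount": 50},
]

def revenue_by_region(rows):
    """The analytic code under test: total the amounts per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals
```

You validate the expected totals by hand once, then pin them in a test; any future change that shifts the numbers gets flagged immediately.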
It can be difficult to write analytic code that runs quickly (especially if you're generating your report with some complex SQL). But slow reports break a business user's focus just as much as slow tests break a programmer's focus. If possible, make your reports run quickly. If you can't make them fast, consider caching the results for quick loading or precomputing the data at regular intervals.
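The caching option can be as simple as a time-to-live wrapper around the slow computation (everything here is a sketch; `compute` stands in for whatever slow SQL or aggregation actually builds the report):

```python
import time

class CachedReport:
    """Serve a precomputed result instantly; recompute only when it goes stale."""

    def __init__(self, compute, ttl_seconds=300, clock=time.monotonic):
        self.compute = compute          # the slow report-building function
        self.ttl = ttl_seconds          # how long a cached result stays fresh
        self.clock = clock              # injectable for testing
        self._value = None
        self._computed_at = None

    def get(self):
        now = self.clock()
        if self._computed_at is None or now - self._computed_at > self.ttl:
            self._value = self.compute()   # the slow part, run rarely
            self._computed_at = now
        return self._value                 # the fast part, run often
```

Swapping the injected `clock` for a fake makes the staleness logic itself testable without sleeping.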
The end bit
Instead of blindly following a set of rules when you write code, try to remember the specific constraints of the environment where your code will run. In some cases, certain constraints don't matter, and you shouldn't waste your time on them.