What you describe is a code smell.
You should never need to unit test a complex idea. Unit tests are for small ideas. They test at the smallest level: a single unit. If a single unit of your code is complex, that is a good sign you need to stop what you are doing and figure out a better approach.
An excellent book on the subject is "xUnit Test Patterns". It is a large tome, but it breaks down many different common patterns you can use. Also there are many good resources out on the Internet to learn about Test Driven Development.
There are many possible issues, and not knowing the code, we can only guess at what yours are. You may have violated the Single Responsibility Principle in your class or your function; a class should have a single responsibility, and a function should do only a single task of that single responsibility. You may have coupling between classes that is too tight. You may need to separate steps of logic into smaller internal units. You may have a poorly written interface that requires too much external configuration. You may not be writing unit tests at all, but rather a different type of test.
Of all of those, writing automated tests other than unit tests and calling them unit tests is the most frequent culprit for beginners.
If you really do need to do something complex, then it probably is not a unit test. A unit test verifies the smallest unit of work, hence the name: you are testing a single unit of work. A unit test should run nearly instantly. A unit test should not touch any other systems, and if a function needs to know about another system, that should be handled with mock objects, fake objects, proxies, or other similar alternatives. If running a test means the system under test touches a file system, a database, a network, or any other hardware, then that test is not a unit test. Typically a unit test is a very short set of 2-3 lines setting up parameters to pass to a single function, then the single function call being tested, then a set of assertions about that function call. They should be as close to instant as possible, with tens of thousands of unit tests running per second. If a test takes milliseconds or longer to run, it is far too slow to be a proper unit test.
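To make that shape concrete, here is a minimal sketch in Python. The function and the pricing service are invented for illustration; the point is the pattern: a couple of lines of setup with a mock standing in for the external system, one function call, then assertions, and nothing that touches a network or disk.

```python
import unittest
from unittest.mock import Mock

# Hypothetical function under test: applies a discount rate fetched
# from some pricing service. The names here are illustrative only.
def discounted_price(base_price, pricing_service):
    rate = pricing_service.discount_rate()
    return round(base_price * (1 - rate), 2)

class DiscountedPriceTest(unittest.TestCase):
    def test_applies_discount_from_service(self):
        # Arrange: a mock stands in for the real pricing service,
        # so the test runs instantly and crosses no system boundary.
        service = Mock()
        service.discount_rate.return_value = 0.25

        # Act: the single function call under test.
        result = discounted_price(100.00, service)

        # Assert: one observable outcome is checked.
        self.assertEqual(result, 75.00)

if __name__ == "__main__":
    unittest.main()
```

Thousands of tests of this shape can run in the time a single database round-trip would take.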
Generally component tests are able to run a small system through its paces by making many calls within the system to exercise the component as a whole, but still use mocks and fake objects for other layers. These should be a little slower than instant. If it takes more than a few seconds to run a suite of component tests on a component, it is generally too long. Component tests should not cross any boundaries, not touch networks or file systems or otherwise hit anything other than the component.
Integration tests tend to be the slow beasts of automated tests. They hit the outer edge of a system and trigger all the real work to be run. They can be written to hit disks and networks and other hardware, since their job is to ensure the end-to-end functionality, and consequently generally take many minutes to run, or on large mature code bases, several tens of minutes to run.
Tests are code. Learning to write good tests is a skill that needs to be developed and honed, just like learning to write any other kind of code requires effort to learn and improve at.
> And that's why I haven't described what those 2 bugs are; to demonstrate how hard/unreliable it is to think we can predict every mistake in an algorithm.
Nobody expects that. That is generally not what unit tests are for.
Tests can have many purposes.
The most typical purpose is to ensure permanence, or in other words, to ensure that if somebody maintaining the code modifies its expected behavior the tests will fail. In longer-term development this is an amazing thing. It allows people to modify the underlying code, to swap out functionality with faster internal details, and the collection of tests will show that it behaves the same way. There may have been defects in the existing code, and some of those defects may even have tests written to ensure the behavior remains exactly the same.
Next, tests generally serve as examples of how to use the code. In test-driven development the developer must first establish how the interface is going to be used, then fill in the details of what it does. When code is mindfully written, this helps ensure the interface is easy to use, rather than coding up a system and discovering in practice that it is difficult to use and needs to be rewritten on day zero.
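As a sketch of that interface-first workflow: the test below is written before any implementation exists. The `Stack` class is a stand-in invented for illustration; the test pins down how the interface will be used, and only then is the simplest implementation that satisfies it filled in.

```python
# Written first: this test doubles as documentation of the intended
# interface. It dictates that Stack has push() and pop(), and that
# pop() returns items in last-in, first-out order.
def test_stack_push_then_pop_returns_last_item():
    stack = Stack()
    stack.push("a")
    stack.push("b")
    assert stack.pop() == "b"
    assert stack.pop() == "a"

# Written second: the simplest implementation that makes the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

Anyone reading the test later sees a working example of the interface, which is often the fastest way to learn how a piece of code is meant to be called.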
Ensuring correctness is a common purpose of unit tests, but as you point out, we all know some cases will slip through. Well-written tests will catch many of these. Good practice means the developer initially writes a few ideal-path and failure-path tests that cover the obvious things they are thinking about; then, over time, when a defect is discovered, it results in a new test that exposes the defect. The defect gets corrected, and the test is modified to become a new ideal-path test... except this time the test starts with a comment linking it to a bug number.
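A hedged sketch of what such a bug-linked regression test can look like. The bug ID, the function, and the defect are all hypothetical; what matters is the convention of tying the test back to the bug report.

```python
# Corrected implementation: strips surrounding whitespace before
# converting, which was the behavior the bug report asked for.
def parse_quantity(text):
    return int(text.strip())

def test_parse_quantity_accepts_surrounding_whitespace():
    # Regression test for BUG-1234 (hypothetical tracker ID):
    # parse_quantity() raised ValueError on input copied from
    # spreadsheets with a trailing space. This test pins the fix
    # in place so the defect cannot silently return.
    assert parse_quantity(" 42 ") == 42
```

Every build re-runs this check, so re-introducing the original defect fails the suite immediately.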
Over time, as the code becomes mature and gets maintained for years, there will be an ever-growing suite of tests that ensures permanence of the correct behavior. The tests will automatically check for regressions of the thousands of bugs in the bug database every time the code is built, and if someone makes a change that would re-introduce one of those bugs, the test will fail because it broke the permanence requirement.