The other answers are correct that the coupling here sits at the data-interface level. But they don't really answer the question posed in the title:
How do I manage shared models among many microservices?
Conventional microservice wisdom is to err on the side of coupling loosely; share as little code as possible.
A brief overview of the problems with sharing code directly can be found here. This excerpt hits on a few good points (emphasis mine):
David emphasizes that all code sharing will attach your services together via the shared code. Creating a single source of truth, adhering to the DRY principle within a single service will create internal coupling but causes no problem in a service with a single responsibility. In contrast, when crossing a boundary, even though some things look the same, they are in a different context and must be different, implemented by different code and using a different data store. David urges that no matter how similar things look, we must resist attaching them because that means we are coupling across boundaries and across different contexts, a direct path to a big ball of mud.
This ebook further illustrates why sharing code across microservices is generally bad, but treats it as a sometimes-necessary evil: it can be pragmatic to share code carefully across services. I would recommend reading the "I Was Taught to Share" AntiPattern section in full for a better understanding of how and when to share code across microservices. For posterity, here are the approaches it discusses for the cases where sharing is appropriate:
Shared Project: Using a shared project forms a compile-time binding between common source code that is located in a shared project and each service project. While this makes it easy to change and develop software, it is my least favorite sharing technique because it causes potential issues and surprises during runtime, making applications less robust. The main issue with the shared project technique is that of communication and control—it is difficult to know what shared modules changed and why, and also hard to control whether you want that particular change or not. Imagine being ready to release your microservice just to find out someone made a breaking change to a shared module, requiring you to change and retest your code prior to deployment.
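To make the compile-time binding concrete, here is a minimal Gradle sketch (Kotlin DSL; the module names are made up, not from the book) of a service that depends on a shared project by source:

```kotlin
// build.gradle.kts for a hypothetical "order-service" module (all names illustrative).
plugins {
    java
}

dependencies {
    // Compile-time source binding: this service always builds against whatever is
    // currently in :shared-models. There is no version to pin, so a breaking change
    // in the shared project only surfaces the next time this service builds or tests.
    implementation(project(":shared-models"))
}
```

The convenience and the danger are the same thing: every change to the shared project flows into every consumer automatically.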
Shared Library: A better approach if you have to share code is to use a shared library (e.g., .NET assembly or JAR file). This approach makes development more difficult because for each change made to a module in a shared library, the developer must first create the library, then restart the service, and then retest. However, the advantage of the shared library technique is that libraries can be versioned, providing better control over the deployment and runtime behavior of a service. If a change is made to a shared library and versioned, the service owner can make decisions about when to incorporate that change.
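For contrast, a rough sketch of the shared-library variant, with the service pinned to a published, versioned artifact (the coordinates and version are made up):

```kotlin
// build.gradle.kts for the same hypothetical service, now consuming a published,
// versioned artifact instead of a sibling source project.
plugins {
    java
}

repositories {
    mavenCentral() // or an internal artifact repository
}

dependencies {
    // The service owner decides when to move from 1.2.0 to a newer release;
    // nothing changes underneath them between deployments.
    implementation("com.example.shared:security-utils:1.2.0")
}
```

The explicit version is what buys you the control the excerpt describes: upgrading becomes a deliberate, reviewable change in the consuming service.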
Replicated Code: A third technique that is common in a microservices architecture is to violate the don't-repeat-yourself (DRY) principle and replicate the shared module across all services needing that particular functionality. While the replication technique may seem risky, it avoids dependency sharing and preserves the bounded context of a service. Problems arise with this technique when the replicated module needs to be changed, particularly for a defect. In this case all services need to change. Therefore, this technique is only really useful for very stable shared modules that have little or no change.
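A small hypothetical example of replication, assuming Kotlin/JVM services (names are made up):

```kotlin
// Hypothetical replicated model: this file is copy-pasted into each service that
// needs it, under that service's own package, rather than published as a library.
package com.example.orders.model

// billing-service would keep an identical copy under com.example.billing.model.
// The copies can evolve (or stay frozen) independently, preserving each service's
// bounded context at the cost of fixing any defect in both places.
data class Address(
    val street: String,
    val city: String,
    val postalCode: String,
    val country: String
)
```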
Service Consolidation: A fourth technique that is sometimes possible is to use service consolidation. Let's say two or three services are all sharing some common code, and those common modules frequently change. Since all of the services must be tested and deployed with the common module change anyway, you might as well just consolidate the functionality into a single service, thereby removing the dependent library.
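As a rough sketch of what consolidation can look like at the build level (hypothetical module names, Gradle Kotlin DSL):

```kotlin
// settings.gradle.kts -- a before/after sketch of consolidating two services
// that share a frequently-changing module.
//
// Before: two deployables plus the shared module they both pull in:
//   include(":order-service", ":invoice-service", ":billing-rules")
//
// After: one deployable that owns the formerly shared code outright, so the
// shared dependency (and its coordination overhead) disappears:
include(":billing-service")

rootProject.name = "billing"
```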
One word of advice regarding shared libraries—avoid combining all of your shared code into a single shared library like common.jar. Using a common library makes it difficult to know whether you need to incorporate the shared code and when. A better technique is to separate your shared libraries into ones that have context. For example, create context-based libraries like security.jar, persistence.jar, dateutils.jar, and so on. This separates code that doesn’t change often from code that changes frequently, making it easier to determine whether or not to incorporate the change right away and what the context of the change was.
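Roughly what that context-based split looks like in a consuming service's build file (all coordinates and versions are made up):

```kotlin
// build.gradle.kts for a hypothetical service: one catch-all common library is
// replaced by context-scoped, independently versioned libraries.
plugins {
    java
}

dependencies {
    // implementation("com.example:common:9.4.1")   // the everything-bucket to avoid
    implementation("com.example:security:2.1.0")    // changes occasionally
    implementation("com.example:persistence:3.0.2")
    implementation("com.example:dateutils:1.0.5")   // effectively frozen
}
```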
Also, related: https://softwareengineering.stackexchange.com/a/366237/209046
Anecdotally, I've done both #2 and #3 on different projects: sharing code in a "microservices common" library (explicitly called out as an antipattern above), and copy/pasting models from one service to another (though those weren't really "microservices") to be maintained separately. The most obvious benefit of #3 was that framework upgrades for each service could be done independently; the drawback was having to maintain model code in two places (not so painful with only two services). As for #2, I distinctly remember that managing our "microservices common" package was the most painful part of development on that project.