
I have a build pipeline that builds my C++ project on Windows, macOS and Linux. The build process generates 100 libraries and files on each OS. So I have a directory with these files, and I want to package them properly.

File foo.dylib goes into osx.zip.

File bar.dll goes into win.zip.

Is there a standard format for this or do I have to invent my own? If I have to invent my own, I thought of describing this in an inverted gitignore-style file format, combined with a C-preprocessor-like solution. It looks very maintainable and would do a good job.

    #if OSX
    *.dylib
    !not_me.dylib
    #else WIN
    *.dll
    !not_me.dll
    #endif

The focus should be readability and maintainability. Makefiles have elements for this, but they are not easy to read, and with the number of files I have they are difficult to maintain. For unit tests there is, for example, the JUnit file format. Is there a similar solution for packaging?
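To make the proposed format concrete, here is a minimal sketch of a selector for such a manifest: glob patterns include files, a leading `!` excludes previously matched ones, and `#if`/`#else`/`#endif` guard blocks by OS. The function name `select_files` and the one-level-deep conditional handling are my own assumptions, not an existing tool.

```python
import fnmatch

def select_files(manifest_lines, os_name, files):
    """Return the files selected by a gitignore-style packaging manifest.

    Plain lines are glob patterns that add matching files; a leading '!'
    removes previously matched files. '#if NAME' activates a block only
    for the given OS, '#else' flips it, '#endif' ends it. Only a single,
    non-nested conditional level is handled in this sketch.
    """
    selected = set()
    active = True  # are we inside an active (or unconditional) block?
    for raw in manifest_lines:
        line = raw.strip()
        if not line:
            continue
        if line.startswith("#if"):
            active = line.split()[1] == os_name
        elif line.startswith("#else"):
            active = not active
        elif line.startswith("#endif"):
            active = True
        elif active:
            if line.startswith("!"):
                selected -= set(fnmatch.filter(selected, line[1:]))
            else:
                selected |= set(fnmatch.filter(files, line))
    return sorted(selected)

manifest = """
#if OSX
*.dylib
!not_me.dylib
#else
*.dll
!not_me.dll
#endif
""".splitlines()

files = ["foo.dylib", "not_me.dylib", "bar.dll", "not_me.dll"]
print(select_files(manifest, "OSX", files))  # ['foo.dylib']
print(select_files(manifest, "WIN", files))  # ['bar.dll']
```

The point of the sketch is that the format is cheap to implement: roughly thirty lines give you the readability of a gitignore file plus per-OS branching.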

  • I'm not entirely clear what you're trying to describe, but it sounds a bit like a Makefile, or one of its many successors and competitors. Commented May 2, 2020 at 16:44
  • The problem with Makefiles is that they are very difficult to read and maintain, and can easily get quite messy. Do you have a "competitor" or "successor" solution that is as readable as my suggestion above? Commented May 2, 2020 at 16:46
  • I feel like that should probably be part of the question. "Is there a standard format for this or do I have to invent my own?" is a rather different question from "Is there a format like X but without disadvantage Y?" Wikipedia has quite a long list of build tools. Commented May 2, 2020 at 16:50
  • Very good point! I edited my question. Commented May 2, 2020 at 16:55
  • Package them properly for what? Versioning? Archiving? Deployment? Commented May 3, 2020 at 20:00

1 Answer


If I interpret your question correctly, how you work with the artifacts depends on your build platform. It sounds like you are currently building from your development environment, or locally. If you move to a build server (which could indeed be local too), it probably has some kind of artifact management that moves the result of the build to storage, where it's usually called an artifact. Later in your release pipeline/flow, you reference and use the artifact of a specific build pipeline and build version.

In my experience, how you format the contents of the artifact depends on how it should look when installed on the end user system, or, if the artifact should be installed through some kind of store, how the store wants the structure. If you are deploying a website to a web server, the structure could be very similar to how you want it to look on the server. If you deploy an ipa, just store the ipa and the other resources required. If you deploy a setup.exe, just store that. If you are to update a SQL server, store the scripts in a way that makes it easy for you to use them in the deploy stage.

Depending on your build server, a build results in either a single artifact or multiple artifacts. Depending on that, the artifact could contain either the resources for a single installation, or multiple folders containing resources for multiple installations. In the latter case, it's nice to have a deployment pipeline that supports partial artifact downloads.

If you don't use a build server and want to keep it local, then aim to make it easy for yourself to find your consecutive releases and to use their contents. Think about the next step: what do you need to do with the result of the build in order to have it running on the target system?

TL;DR:

  1. Source structure format
  2. Build
  3. End user structure format/installation structure format/store deployment format...
  4. Store as artifact
  5. Deploy from artifact (to user/server/store etc)
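As a local approximation of steps 3–4, a small script can collect the per-OS build outputs into a versioned zip artifact. The directory layout (`artifacts/<version>/`), the function name `package`, and the example arguments are assumptions for illustration, not a standard:

```python
import zipfile
from pathlib import Path

def package(build_dir, patterns, archive_name, version):
    """Collect build outputs matching the given glob patterns into a
    versioned zip artifact under artifacts/<version>/."""
    out_dir = Path("artifacts") / version
    out_dir.mkdir(parents=True, exist_ok=True)
    archive = out_dir / archive_name
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in patterns:
            for f in sorted(Path(build_dir).glob(pattern)):
                # Store files flat in the archive root; adjust the
                # second argument to keep a directory structure instead.
                zf.write(f, f.name)
    return archive

# e.g. package("build", ["*.dylib"], "osx.zip", "1.0.42")
```

Storing each build under its own version folder makes step 5 straightforward: the deploy stage just picks a version and unpacks the matching archive.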
