If I were designing a centralized asset management system then I'd definitely add support for git-style … (I'm unfamiliar with P4's locking semantics: when files are "locked", does that prohibit other users from even getting a copy of the centralized file, or merely prevent them from overwriting the centralized file on push/upload? How does branching work?)

> Locks are terrible, and yet you gotta do it sometimes; how else would you prevent people from working on the same asset?

If you have a large enough team to invest in dev experience, there are far better ways to get the advantages of the article without the downsides. You can cache the npm ci result in a container layer for your CI/CD, or use middleware like Artifactory.

Upsides from storing node_modules in the repo are outweighed by the downsides. Unless of course you're Google-scale and can afford to contribute filesize fixes upstream, write fancy tooling to enforce commit-time workarounds, etc. For your average npm shop, which doesn't have infinite internet oil money, here is why the article's recommendations won't work for you:

- Your CI will pay the time penalty during git clone instead of npm ci. In fact, the node_modules folder will be bigger than your source folder almost immediately. And over time you won't be cloning just the head files; you'll also be cloning every npm package binary ever committed. You can't undo this without investing in smarter git tooling.
- NPM packages which install arch-specific binaries will constantly flip-flop in commits by devs on different OSes, which is time spent not writing features. Nobody working finger-to-feature has time for this.
- Nobody is safe from left-pad, not even Google, and committing your node_modules folder doesn't change that. Eventually someone is going to have to run npm i anyway. Running npm ci on everyone's machine is reproducible, so I don't know what the OP is warning about.
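The "cloning every npm package binary ever committed" point can be sketched with plain git, no npm required. This is an illustrative demo (the file names and sizes are made up): once a vendored binary is committed, every clone carries every historical version of it, not just the head copy.

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email "demo@example.com"
git config user.name "demo"
# Simulate two revisions of a vendored, incompressible 1 MB dependency.
dd if=/dev/urandom of=dep.bin bs=1024 count=1024 2>/dev/null
git add dep.bin && git commit -qm "vendor dep v1"
dd if=/dev/urandom of=dep.bin bs=1024 count=1024 2>/dev/null
git add dep.bin && git commit -qm "vendor dep v2"
# The working tree holds ~1 MB, but .git now holds both versions (~2 MB),
# and a fresh clone downloads all of it.
echo "worktree KB: $(du -sk dep.bin | cut -f1)"
echo ".git KB:     $(du -sk .git | cut -f1)"
```

Random bytes are used precisely because they don't compress, which is roughly the situation with prebuilt binary packages: git's delta compression can't claw the space back, so repo size only grows.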
Disappointed to see so many knee-jerk reactions to this. Vendoring dependencies is a simple way to ensure consistent build inputs, and has the bonus effect of decreasing build times.

1) "It takes a lot of space." Don't be so sure. I have a 9-year-old Node repo that I've been vendoring from the beginning and it's only grown 200MB over that time. (Granted, I'm fairly restrained in my use of dependencies. But I do update them regularly.) But even if it does take a lot of space… so what? If your dependencies are genuinely so huge that this is a problem, then vendoring may not be right for you. But you could also use one of the many techniques for managing the size of your repo. Or just acknowledge that practices are contextual, and there's no such thing as "best practice", just a bunch of trade-offs.

2) "It doesn't work well with platform-specific code." This can cause some pain if you're in a multi-platform environment. The way I deal with it (in Node) is by installing modules with --ignore-scripts, committing the files, running "npm rebuild", and then adding whatever shows up to …. I have a little shell script that makes this easier. This is only an issue for modules that have a platform-specific build, which I try to avoid anyway. But when it comes up, it can be a pain in the butt. I find its pain to be less frequent and more predictable than the pain that comes from not vendoring modules, though, so I put up with it.

"Best practices" are all situational, and the only way to know if a practice is a good idea is to examine its tradeoffs in the context of your situation.
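A hypothetical reconstruction of the helper script described above. The actual script is not shown in the thread, and the destination of "whatever shows up" is left unstated there; routing it into .gitignore is my assumption, as is the plain git + npm project layout.

```shell
cd "$(mktemp -d)"
# Write the sketch to a file so its syntax can be checked without
# actually hitting the npm registry.
cat > vendor-deps.sh <<'EOF'
#!/bin/sh
set -e
# 1. Fetch dependency sources without running install/build scripts,
#    so nothing platform-specific lands in the tree yet.
npm install --ignore-scripts
git add node_modules
git commit -m "vendor node_modules (sources only, no platform builds)"
# 2. Run the build scripts for this platform only.
npm rebuild
# 3. Anything npm rebuild created is platform-specific; keep it out of
#    git (assumed destination: .gitignore).
git status --porcelain -- node_modules | cut -c4- >> .gitignore
EOF
sh -n vendor-deps.sh && echo "syntax ok"
```

The idea is that the committed node_modules stays identical across platforms, while each developer's locally built artifacts are invisible to git, which avoids the "arch-specific binaries flip-flop between commits" problem raised in the other comment.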