• fruitycoder@sh.itjust.works · ↑1 · 43 minutes ago

    I really don’t understand why this gets so complicated. Don’t we all just make a microservice whenever we make a job or process in a project, then slap those in a namespace on k8s, put them behind a Service when you want to expose them to the cluster, and an Ingress for the world?

    If that gets complicated, you refactor it just like any other code base and debug it like any other multithreaded app. Make sure actions are atomic. Functional programming helps a lot too, because it keeps you thinking about streaming vs. stateful, though you can have stateful stuff in k8s.
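    For what it’s worth, the namespace/Service/Ingress pattern described here is only a few lines of manifest. A minimal sketch (every name, namespace, host, and port below is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders            # hypothetical service name
  namespace: shop         # hypothetical namespace
spec:
  selector:
    app: orders           # routes to pods labeled app=orders
  ports:
    - port: 80            # cluster-internal port
      targetPort: 8080    # container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  namespace: shop
spec:
  rules:
    - host: orders.example.com    # world-facing hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80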

  • jjjalljs@ttrpg.network · ↑6 · 16 hours ago

    One of my jobs went to microservices. Not really sure why. They had daily active users in the thousands, maybe. But it meant we spent a lot of time on inter-service communication, plus local development and testing got a lot more complicated.

    But before that, it was a single API written in Go by an intern, so maybe it was an improvement.

  • NigelFrobisher@aussie.zone · ↑1 · 11 hours ago

    I would build this microservice architecture out of a desire to do good, but through me it would wield a power too terrible to comprehend.

  • stupidcasey@lemmy.world · ↑6 · 17 hours ago

    Director: Blue, I need you to work on a patch for the suspension barrier; it seems to have a memory leak.

    Blue: I could if I weren’t allocating so much time to holding together a patchy framework. If I drop it now, the whole system will break. What we need to do is rebuild the entire thing from scratch in an entirely new blue framework; if we did that, we wouldn’t have to allocate so many resources to patchwork.

    Director: That sounds like it would cost money…

    Blue: It would now, but in the future it would save us so much more.

    Director: Couldn’t you just work harder for less pay?

    Blue: No, I literally couldn’t.

    Director: Green seems to be holding it together well.

    Blue: That’s because Green is at the bottom of the stack. He doesn’t have to deal with it; he makes it our problem.

    Director: I don’t know, sounds like a skill issue to me. No vacation time until it’s fixed.

    Blue: Like I even get a vacation anyway.

    • mcv@lemmy.zip · ↑5 · 16 hours ago

      The system I’m working on is shit. The devs all know it, the users all complain about it. It needs to be fixed, and not only do I seem to be the person most driven to fix it, it turns out I’ve been hired explicitly to replace the shittiest part of it. So that’s actually pretty good, right?

      Except my PO doesn’t quite want me to do that yet. First, he needs me to shovel more shit onto it. Shit that will be the first stuff I replace once I’m finally allowed to start replacing things, at which point adding it will be a lot easier, whereas now it’s a lot harder and too slow to be usable. But new features are more important than making this usable.

      He’s a nice guy, but he doesn’t get technical priorities, and priorities are the primary responsibility of a PO.

  • LyD@lemmy.ca · ↑6 · 19 hours ago

    The architect is colouring all the balls

    The senior developer is arguing with the architect

    The junior developer is cannonballing somewhere in the middle

    • SleeplessCityLights@programming.dev · ↑3 · 17 hours ago

      When I was a sysadmin at an MSP, we had a client with 2 main sites and multiple satellite sites. At one of the satellite locations there were two servers: the first ran a bunch of VMs, and the second was the backup. If you disconnected the backup, AD stopped working everywhere and half of the NAS storage became unreachable. As far as anyone knew, the second server was only supposed to spin up replacement VMs if the first went down, nothing else. We were a pretty shitty MSP and never spent any time on proactive work. So when that first server dies, that company is going to have the most epic outage, and it will cost them a fortune.

  • peoplebeproblems@midwest.social · ↑19 ↓1 · 1 day ago

    You know, this really has me pondering my project’s architecture. We have tiers of services.

    At the top, we have the UI. Then we have a “consumer” tier, an “orchestra” tier, and a “data” tier.

    Data is the tier that exclusively talks to databases. Orchestra talks to the multiple data services. A good chunk of business logic is here. Consumer uses the orchestra and handles UI requests.

    All it essentially does is split the monolith into 3 services at minimum. And since it’s on the cloud, there’s a startup cost where we need to spin up 3 machines instead of whatever you can do with microservices. What benefit do I get?
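    (To make the tiering concrete, here is a toy sketch of the three layers as plain classes. All names are hypothetical, and a dict stands in for the real database:)

```python
# Toy sketch of the consumer / orchestra / data split described above.
# Class names, method names, and the dict-backed "database" are all made up.

class DataTier:
    """Only this tier talks to the database."""
    def __init__(self, db):
        self.db = db  # a dict standing in for a real database

    def get_user(self, user_id):
        return self.db.get(user_id)


class OrchestraTier:
    """Coordinates one or more data services; holds the business logic."""
    def __init__(self, data: DataTier):
        self.data = data

    def user_display_name(self, user_id):
        user = self.data.get_user(user_id)
        return user["name"].title() if user else "unknown"


class ConsumerTier:
    """Handles UI requests by calling into the orchestra tier."""
    def __init__(self, orchestra: OrchestraTier):
        self.orchestra = orchestra

    def handle_request(self, user_id):
        return {"displayName": self.orchestra.user_display_name(user_id)}


db = {1: {"name": "ada lovelace"}}
api = ConsumerTier(OrchestraTier(DataTier(db)))
print(api.handle_request(1))  # {'displayName': 'Ada Lovelace'}
```

    In-process, the layering is just three function calls; the cost question in the comment is about paying network hops and machine spin-up for the same boundaries.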

    • adminofoz@lemmy.cafe · ↑7 · 20 hours ago

      Separation of concerns is a major benefit that shouldn’t be overlooked, and it has security implications. Assuming you properly restrict access to each worker node / “tier”, when one tier inevitably gets compromised, it doesn’t result in a complete compromise of the whole system the way it would in a monolith.

      • themaninblack@lemmy.world · ↑2 · 15 hours ago

        You have me thinking. My gut tells me this is true.

        For example, if someone gets root on a segmented auth service, it’s possible for them to act as anyone else, but not to grab the whole database if it isn’t reachable from that service.

        If your load balancer gets compromised, you could cause denial of service or act as a man-in-the-middle for all requests.

        If your database gets got, that’s the worst, but you generally can’t intercept web requests and other front-end facing things.

        But, I’d like to play devil’s advocate here. I feel that most of these segmented architecture strategies may have negative security implications as well.

        First, the overall attack surface increases. There are more redundant mechanisms, more links in the chain, probably more differing types of security/tokens/certificates that can get exploited. It also adds maintenance burden, which I believe reduces security because other priorities may get in the way if things are cumbersome.

        In my examples above, a compromise of the auth service in most cases pretty much means a complete compromise of whatever your system allows its highest-level users to do. Which is normally a lot.

        Getting a load balancer will allow an attacker to MITM if TLS termination happens there, and basically this can mean the same as in the auth service, plus XSS-type stuff.

        If the service hosting the database is compromised, it’s kinda game over. Including XSS.

        So what have we gained here?

        A monolith hosting all of these has more or less the same consequences if compromised. However, when it’s all together, it becomes everyone’s responsibility and there are more eyes on each aspect of your application. You’re more likely to update things that need updating, and traffic can be analysed a little more easily.

        Just wanted to jot down some notes because I have a talk coming up and need to prepare for this question. Please prod my thinking, it would really help me out!

        • adminofoz@lemmy.cafe · ↑2 · edited · 11 hours ago

          By no means am I the microservices guy. I’m more of a self-hosted person than anything; I used to always be a monolith guy and would still prefer that in many situations. But now I would at least “wrap” the monolith with supplemental self-hosted microservices.

          But TL;DR, this is the logic as I understand it, and the key thing: don’t cast your pearls before swine. It’s basically biblical. Lol jk jk. But really, put a cheap reverse proxy with a honeypot and some alerting in front, or even better a WAF and/or EDR, then catch and isolate attackers when they compromise your front end and garbage honeypot, before they can even move laterally internally.

          The longer, slightly more technical answer: when a malicious actor compromises one utility, they likely made a lot of noise doing it, and that noise is key to securing the assets. First, a lot of malicious activity can be mitigated with a proactive WAF. There are a few free options here: the CrowdSec WAF (ModSecurity is another, I think; working from memory, could be wrong) has decent signature detection and a shared ban list. If you couple it with proper alerting, you should be able to see, watch, and isolate attackers in near real time. So even if they get the reverse proxy and you messed up the alerting on the WAF, with layered security you still have your fallback detection stack (something like ELK) to alert when proxyUser starts issuing ping commands and performing asset discovery. You should see it days before they escalate privileges (unless it’s a 0-day, a nation state, etc.).

          They will still do damage, you’re absolutely right. But let’s assume a tiered microservice approach for a functioning SaaS app where you have something like PocketBase for auth, Umami for analytics, Stripe for payments, and Postgres for paid API data. Even an issue in PocketBase/auth doesn’t necessarily mean they get all your paid API data, because hopefully you have per-user rate limits on Postgres and the backend services (should your PocketBase user even be reading or writing your paid data tables? Additionally, alerting should provide observability into admin sign-ins from new/suspicious locations, or other suspicious behavior such as one user signing into multiple accounts, seeking privilege escalation, and so on). But the main thing: they don’t get any cardholder data, and that is a huge win. In fact, if you are storing cardholder data, PCI compliance requires segmentation.
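          The per-user rate limiting mentioned here can be as simple as a sliding window per user ID. A minimal sketch (the class name, limits, and window are all hypothetical, and a real deployment would enforce this at the gateway or database role level):

```python
import time
from collections import defaultdict, deque

class PerUserRateLimiter:
    """Sliding-window limiter: at most `limit` calls per `window` seconds per user."""
    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # user_id -> timestamps of recent calls

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[user_id]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over budget: reject (and ideally alert)
        q.append(now)
        return True

limiter = PerUserRateLimiter(limit=3, window=60.0)
print([limiter.allow("alice", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

          Even if an attacker fully owns the auth tier, a cap like this bounds how much paid data one identity can exfiltrate before tripping an alert.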

          Additionally, look at the actual CVEs related to PocketBase and you’ll find a lot to do with OAuth, so in this case it’s simple: disable OAuth for best security. Put a WAF in front of your app, using something like Traefik with CrowdSec or ModSecurity with an nginx reverse proxy, to catch bad actors when they try to abuse your nonexistent OAuth endpoint, and ban them instantly. You catch a lot of bad actors with that trap.

          Or to take it back to your first example: if I have a segmented service that gets compromised, I want to catch and isolate the attacker before they even realize they’re in a rootless Podman container, by taking advantage of the natural footguns that any script or malicious actor would stumble into. For instance, if my “reverseProxyUser” or any process in that entire container uses the sudo command, that is a 10/10 fire-type alert. I’m pretty sure you could even automatically isolate it or spin it down with a few scripts, something like Argo, or probably even off-the-shelf EDR.
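          As a toy illustration of that tripwire, a “proxy user ran sudo” rule is only a few lines. In practice the events would come from auditd, Falco, or an EDR agent; the event shape, usernames, and severity scale below are all made up:

```python
# Toy detection rule: flag any `sudo` execution by the reverse-proxy user.
# Event format, usernames, and the severity scale are hypothetical.

HIGH_SEVERITY = 10  # the "10/10 fire" level from the comment

def check_event(event):
    """Return an alert dict for suspicious events, else None."""
    user = event.get("user")
    command = event.get("command", "")
    if user == "reverseProxyUser" and command.startswith("sudo"):
        return {"severity": HIGH_SEVERITY,
                "reason": f"{user} ran: {command}"}
    return None

events = [
    {"user": "reverseProxyUser", "command": "nginx -s reload"},  # normal
    {"user": "reverseProxyUser", "command": "sudo whoami"},      # tripwire
]
alerts = [a for a in (check_event(e) for e in events) if a]
print(alerts[0]["severity"])  # 10
```

          The alert handler could then quarantine or spin down the container automatically, as suggested above.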

          Is it perfect? No. Any determined actor will find a way into any system given enough time. But a layered approach like this is best, in my opinion. Of course it needs to be modified for every system; this is just one example.

          You can do the same thing with a monolith and good scripting; it isn’t exclusive to microservices. It’s just natively built that way in the setups I’m aware of, thanks to the prominence of Kubernetes, really. At least I think that’s why.

          Edit: I can’t type / got interrupted mid-reply. It’s half decent now.

    • NewDark@lemmings.world · ↑5 ↓1 · 23 hours ago

      If you aren’t a gigantic company with so many moving parts it would make your head spin… probably not much. There is a benefit in being able to scale services individually based on need, but that feels like overkill for most.

  • Dumhuvud@programming.dev · ↑4 ↓1 · 19 hours ago

    I’m so confused by the meme. What the hell is a “monolithic bug”? And what does DevOps have to do with software architecture?

    • SleeplessCityLights@programming.dev · ↑7 ↓1 · 17 hours ago

      Most of us in DevOps spend our days designing and coding. We are deeper into the architecture than anyone else; the dev team doesn’t even consider deployment and uptime. We have to go to each separate team’s meetings because our input is needed for many decisions. I had no idea it was like this when I got into it.

      • themaninblack@lemmy.world · ↑4 · 16 hours ago

        Devs used to have to consider deployment and uptime! They still should. We as an industry became arbitrarily segmented and irresponsible. I have never gotten used to this tossing shit over the fence.

        • SleeplessCityLights@programming.dev · ↑1 · 2 hours ago

          I look at it differently: everything used to be a cobbled-together mess without real consideration for running live. Then, when you went to scale, you had to redo the whole thing because its base architecture was garbage. This is going to sound dumb, but the philosophy behind DevOps creates an environment that encourages building extensible systems, ones that hopefully won’t require taking a fucking sledgehammer to them when you finally get users. Doing my role the old way would require time from a dev on each team plus a sysadmin who understands software and the OS at a low level. That is really inefficient and has communication gaps.