• 1 Post
  • 171 Comments
Joined 2 years ago
Cake day: June 30th, 2023

  • So you have local DNS set up?
    If you ping (or dig) speed.mydomain.local, does it resolve to the same address as local_ip?
    Considering you are accessing local_ip:3000 directly but the domain on port 443, there is clearly either a firewall somewhere redirecting packets, or a reverse proxy in front of the domain that isn’t in front of local_ip:3000

    Follow the chain: ports, forwarding, proxying, etc. One of those will be the bottleneck. Then figure out why

    Edit:
    Just because your ISP speed is 100 Mbps and you are seeing 500 Mbps doesn’t mean the connection isn’t hairpinning through your router via its public IP (as in, the traffic never leaves your router, but it still goes through it)
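
    A rough way to check where the paths diverge (a sketch; speed.mydomain.local, local_ip and the /largefile path are placeholders for whatever your setup actually serves):

      # does local DNS hand out the same address you use directly?
      dig +short speed.mydomain.local
      # pull the same payload both ways and compare throughput
      curl -o /dev/null -w 'direct:  %{speed_download} bytes/s\n' http://local_ip:3000/largefile
      curl -o /dev/null -w 'proxied: %{speed_download} bytes/s\n' https://speed.mydomain.local/largefile

    If the direct number is much higher, the bottleneck is whatever sits between the domain and the app (proxy config, firewall rule, hairpin NAT).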




  • Hmm, maybe I mean moral?
    Like, there is a correct way to go about something regardless of context.
    As opposed to doing something because of the context.

    Any exploit should be reported to the software/platform maintainers, with a proper disclosure timeline to ensure it gets fixed promptly.
    That is the correct way.

    Abusing the shit out of a poorly implemented nazi government’s systems is the moral thing to do, but it would go against a white hat’s ethics. Collectively a good thing to do, but not the correct thing to do as a white hat.

    Are gray hats more ethically and morally true?
    This is getting too deep for me.



    Yeh, being a high-value target (Twitter) and being an actual high-value target (government) are entirely different things. I bet many countries were salivating over the mere idea of these servers.

    I guess they will pass some laws about “hacking being illegal”, arrest some poor self-hosters that did nothing wrong, declare a victory, and change absolutely nothing - other than ruining people’s lives.

    I remember an article about a batch of compromised NICs from China that had backdoor firmware in them. You can harden your software system all you want, but when the literal hardware is backdoored, you are doomed.
    I think it was Supermicro. So an American company, and not a small manufacturer.
    I wonder if DOGE have reputable hardware, or if they cheaped out on servers.


  • Yeh, but they aren’t keeping control.
    They have been elected. They have 4 years.
    So far, it doesn’t seem that they have broken any laws, or whatever else would cause the system to reject their workings. And they’ve rigged the courts, so the system is unlikely to reject them anyway.
    I’d say it’s more of a constitutional coup. They are using loopholes to seize more power.
    I think it will be an attempted self-coup in 4 years.

    Regardless, it isn’t worth arguing about.
    It’s wrong. It’s a shit sandwich, the flavour of shit doesn’t matter.


  • Sorry for the wall of text.

    You would hope that a public front end is entirely isolated from critical systems.

    Hackers got in.
    Either they saw there was nothing of value, and figured they would embarrass the owners;
    or they saw shitloads of value, but decided the ethical thing was to embarrass rather than exfil/exploit/sell the access;
    or they were explicitly aiming to embarrass the owners, and didn’t explore the scope beyond that.
    It’s likely “gay furry hackers” or similar, and it’s “gray hat” hacking.

    The ethical route, ie “white hat”, is to contact the owners about the exploit with a fixed period disclosure. Ie, “fix this in 30-90 days, or we will publish our method”.
    “Gray hats” are more like this. They find an exploit that could go deeper, but they do some lulz instead. Basically, they make it obvious something has been hacked, but don’t actually exploit it further.
    “Black hat” would find the exploit (even if it was limited access) then sell it while trying to leave no trace, so it can be exploited again. Or straight up exploit it themselves.

    There is a possibility of foreign agents doing false-flag gray hat shit. Exfil sensitive data, cover their tracks, then “botch” some “hahaha you’ve been pwnd” stuff. Both getting sensitive data, and derailing the US government (because Musk has been authorised by Trump. It’s a huge undermining).

    With the timeline, this seems like gray hat, or black hat further exploited by gray hat. Or false flag.

    The obvious aim is to embarrass the owners.
    This casts serious political shade on the DOGE servers that have been hooked into government networks without oversight. Any further data exfil is a bonus to certain foreign countries.

    Best case scenario is that this is domestic gray hat, the muSSk team learns from it, figures out how actual internet security works, and hardens their systems accordingly.
    I mean, the actual best case is that this DOGE coup gets stopped. But the president has authorised DOGE, so this is what America wants. So, not a coup.

    Ideally, this hack has zero actual security impact.
    Other than the “yeh, but if they can get into your public web server (something expected to be hardened as fuck, and might as well be static file hosting. Seriously, why is there a database for this shit), how can we trust your servers on government networks”.
    But chances are the exploits to get into this server will be similar to the exploits to get into the government connected DOGE systems. Unless the sysadmin & network admins (god bless them) have managed to maintain some control that muSSk doesn’t understand, and are able to mitigate the tsunami of access such a compromised server might unleash.


  • I’m currently reconsidering using a couple mikrotik for some layer 3 hardware offloading.
    Not really homelab, but close.

    I have a project that gets integrated with another network for an event. I’m thinking of using 2x CRS504 (because I’m using MLAG for the servers; think VRRP or whatever for the “public” (it’s all internal) IP) and seeing if I can get L3HW working as a router.
    While I could sit on a subnet of the “host” network, having a gateway that traffic goes through allows me to test and prove everything for my system in my homelab, with just the final integration being a do-in-a-time-crunch problem.
    I’m already using the CRS504s for networking (I bought them ages ago, thinking 25 Gbps was going to be as easy as 10 Gbps. It’s all running at 10 Gbps), and this saves having to use something else as a router, cuts down on rack space, all sorts of benefits. I think.
    Anyone have any experience with MikroTik L3HW offloading? There’s a rough sketch of what I mean at the end of this comment.

    My actual homelab is just a NAS and some networking. It’s a small flat, it’s just me. Not complicated, no need to give me more headaches!
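
    For reference, the bit I’d be testing first is roughly this; it’s a sketch from memory, so double-check it against the MikroTik L3HW docs, and the hostname is a placeholder:

      # enable L3 HW offloading on the switch chip (run over ssh; admin@crs504 is a placeholder)
      ssh admin@crs504 '/interface/ethernet/switch set 0 l3-hw-offloading=yes'
      # it can also be toggled per switch port if some ports should stay software-routed
      ssh admin@crs504 '/interface/ethernet/switch/port set [find] l3-hw-offloading=yes'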





  • If it’s a break in the middle of the fibre, then they will use an epoxy housing for the splice.
    I don’t know the specifics, but something like this:
    Cut/clean up the break, pass the cable through an epoxy housing and tighten the cable grips. Strip back the protective layers, cleanly cut the fibres and splice them all appropriately. Carefully tuck it all inside the epoxy housing, fill with epoxy and let it set. Then bury/rig it again.
    That’s what the large plastic cylinders you see on cables are.
    Similar housings are used for splicing copper (both data and high voltage) cables that have to withstand elements/burying, just the size (and possibly internals, epoxy type etc) change.
    Black plastic cylinder that’s larger than the cable, with a couple of cables coming out? Probably a splice point.


  • They just cut it roughly, strip back the protective layers, then do a very precise and clean cut on the actual fibre and polish the end.
    Most of the time it will get spliced onto a pigtail in a patch panel (rather than having a connector installed directly onto the field fibre). At that point the cleanly cut fibre is precisely aligned with the fibre from the patch panel, then melted together.
    It’s very precise. Splicing tools often use extremely high magnification and very precise actuators to align the two fibre ends before they are fused.


  • At scale, it can be considerably cheaper.
    The API server limits data access according to security policy and does some basic filtering based on the request. That’s not a huge amount of processing for it to do.
    Web pages, desktop app, mobile apps, other servers can all use it to access the data.
    Template rendering is then done on the client side. So processing for that is done on the client, saving a lot of compute cost - meaning the servers can respond to more API requests.
    Data transferred is lower as well. A template that gets populated by the client using data from an API request will be overall smaller than the full template rendered server side.
    The client apps can then be managed entirely separately from the server apps, without having to be tightly integrated. This lets the front-end team do what they want and the back-end team do what they want, as long as the API endpoints stay consistent.

    For most things, an SPA isn’t required or even desirable (which is why server-side rendering of SPAs is a thing).
    But SPAs should give a better experience to users, and can be easier to build.
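
    As a rough illustration of the transfer-size point (hypothetical endpoints, just to show the comparison):

      # raw JSON from the API vs the same content as a fully rendered page
      curl -so /dev/null -w 'API JSON:      %{size_download} bytes\n' https://example.com/api/articles/42
      curl -so /dev/null -w 'rendered HTML: %{size_download} bytes\n' https://example.com/articles/42

    The JSON only carries the data; the markup ships once with the SPA and gets cached by the client.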



    Default config is defined in the firmware. It can’t easily be deleted or changed (I think there is a reseller option to ship a custom default config).
    The “no default config” means the default config will not be applied after the reset.
    If you reset it again without checking “no default config”, then the default config will be applied.

    “No default config” is very useful for applying your own config script. It gives you a blank canvas, making scripting a lot easier!

    I have my “config.rsc” file that has the required configuration. And I have a “reset.auto.rsc” file that only has the command to reset the mikrotik with no defaults and to run the “config.rsc” script after reset.
    Any “*.auto.rsc” file will be executed as soon as it gets FTPd (it’s a MikroTik feature).
    I use a bash script that FTPs the config.rsc file to the MikroTik, then the reset.auto.rsc file.
    Makes it trivial to tweak the config and re-apply it, and I get all the config for the devices in easy-to-edit/diff script files.
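
    A minimal sketch of that bash script (router address and credentials are placeholders; check the exact reset-configuration syntax for your RouterOS version):

      #!/usr/bin/env bash
      ROUTER=192.168.88.1        # placeholder address
      CRED='admin:password'      # placeholder credentials

      # upload the main config first, then the auto-run file;
      # RouterOS executes a *.auto.rsc file as soon as its FTP upload completes
      curl -T config.rsc     "ftp://$CRED@$ROUTER/"
      curl -T reset.auto.rsc "ftp://$CRED@$ROUTER/"

    reset.auto.rsc itself is just the one line, something like “/system reset-configuration no-defaults=yes run-after-reset=config.rsc”.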