• 0 Posts
  • 16 Comments
Joined 6 months ago
Cake day: December 27th, 2023


  • smb@lemmy.ml to 196@lemmy.blahaj.zone · Rule · 3 months ago

    i think it was not the whole hull but one of the materials the hull was made of that had expired. well, carbon fibre has its strength when pulled, but it bends when pushed. if one uses resin on the fibre, though, it gains some strength when pushed too. similar to steel and concrete: while steel can really be pulled a lot, concrete is way better when pushed than steel. steel is quite stable when pushed too, but that’s not its main strength. i think the resin was what really held the pressure in the sub, not the carbon fibre, but here i only have that dangerous type of half-knowledge i’d have to bring to expert level before doing something stupid (like depending on that to be fully true without really knowing).

    in general things often last longer than the expected “minimum” they can be used without concern. but in practice one would have to test for damage or wear (like it’s done with airplane parts at fixed intervals) even without using materials of bad quality. and that was, AFAIK, exactly what oceangate’s management decided to explicitly NOT check the sub for - despite internal demands to do so.

    i would not say it’s impossible to build a safe pressure hull out of carbon fibre, or out of carbon fibre of less than the best quality, or a hull of a different shape than a sphere, or a hull out of different materials with different bending behaviours under pressure, or one where such components are “glued” together at the edges that do the different bending - but ALL of this at the same time, and without even checking the hull at least after a new maximum depth was reached? not to mention the crackling sounds, after hearing which one would want to double-check. even the wright brothers seemed more cautious to me.

    today one would at least gather some wear-level statistics with unmanned vehicles at a slightly greater depth than intended, to have safety margins, and afterwards do thorough checks of the parts that are important, are single points of failure, or are among the proudly newly developed ones.




  • i am happy to have a raspberry pi setup connected to a VLAN switch. internet comes in behind a modem (bridged mode) connected by ethernet to one switchport, while the raspi routes everything through one tagged physical gigabit switchport. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens; so voip calls recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.
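    as an illustration, one tagged vlan on the raspi could look roughly like this in a debian-style /etc/network/interfaces (the vlan id and addresses are made up, and the “vlan” package is assumed):

    # hypothetical fragment of /etc/network/interfaces on the raspi
    # vlan 10 arrives tagged on eth0; the raspi is the gateway for that zone
    auto eth0.10
    iface eth0.10 inet static
        address 192.168.10.1
        netmask 255.255.255.0
        vlan-raw-device eth0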

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing, for the rather specialized setup i have.
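    a static record in unbound can be as simple as this (zone and address are made up):

    # hypothetical /etc/unbound/unbound.conf.d/static.conf
    server:
      local-zone: "example.lan." static
      local-data: "nas.example.lan. IN A 192.168.10.5"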

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with their own rules. hardware redundancy, special tweaking, everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) are transferring when calling home, and much more.

    however, regarding ddns it sometimes feels safer and more reliable to have a somehow reserved IP that would not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is this flexible. that said, the best ready-made router system to me seems to be openwrt: you are not bound to a hardware vendor, get security updates longer than with any commercial product, can copy your config 1:1 to a new device even if the hardware changes, and have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at and has some similarities to openwrt while being different.



  • smb@lemmy.ml to Linux@lemmy.ml · Btw · 4 months ago

    a woman would take care of a literal horse instead of going to therapy. i don’t see anything wrong there either.

    it’s just that a horse is way more expensive, cannot be put aside for a week during vacations (could a notebook be put aside?), and one cannot make backups of horses or carry them along when visiting friends. Horses are way more cute, though.



  • sorry if i repeat someone’s answer; i did not read everything.

    it seems you want it for “work”, which assumes that stability and maybe something like LTS is sort of the way to go. this also implies older but stable packages. maybe better choose a distro that separates new features from bugfixes; this removes most of the hassle that comes with rolling releases (like every single bugfix coming with two more new bugs, one removal/incompatible change of a feature that you relied on, and at least one feature that cripples stability or performance while you cannot deactivate it… yet…)

    likely there is at least some software you will want to update outside the regular package repos, like i did for years with chromium, firefox and thunderbird, using a shellscript that compared the current version with the latest remote one and downloaded and unpacked it if needed.
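    a rough sketch of that idea (the url, paths and version file are placeholders, not real endpoints):

    #!/bin/sh
    # hypothetical updater sketch: download and unpack only if the remote version is newer
    current="$(cat /opt/firefox/VERSION 2>/dev/null)"
    latest="$(curl -s https://downloads.example.org/firefox/latest.txt)"
    if [ "$current" != "$latest" ]; then
        curl -sL "https://downloads.example.org/firefox/firefox-$latest.tar.bz2" \
            | tar -xjf - -C /opt
        echo "$latest" > /opt/firefox/VERSION
    fi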

    however, maybe some things NEED a newer system than you currently have; if you need such software, consider running it in VMs, maybe using ssh and X11 forwarding (oh my, i still don’t use/need wayland *haha)

    as for me, i like to have some things shared anyway, like my emails on an IMAP store accessible from my mobile devices, and some files synced across devices using nextcloud. maybe think outside the box from the beginning. no arch-like OS gives you the stability that long-established things like debian or redhat/centos offer, but be aware that some OSes might suddenly change to rolling release (like centos did, i believe) or include rolling-release software made by third parties without respecting their own rules about unstable/testing/stable branches, and thus might cripple their stability by such decisions. better stay up to date on whether what you update to really is what you want.

    but for stability (like at work) there is nothing more practical than ancient packages that still get security fixes.

    over roughly the last 15 years or more i only reinstalled my workstation or laptop for:

    • hardware problems, mostly aged disks like an ssd with its wear level down (while recovery from backup or direct syncing is not reinstalling, right?)
    • the OS becoming EOL. that’s it.

    if you choose to run servers and services like imap and/or nextcloud, there is some gain in quickly switching workstations: without having to clone/copy everything, you only place some configs there and you’re done.

    A multi-OS setup is more likely to cover “all” needs, and tools like x2vnc exist and can be very handy then; i nearly forgot that i was working on two very different systems when i had such a setup.

    I would suggest making recovery easy: maybe put everything on a raid1 and make sure you have an offsite and an offline backup with snapshots, so in case something breaks you just need to replace hardware. that’s the stability i want for the tools i work with, at least.
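    a minimal raid1 sketch with mdadm (device names are examples; creating the array destroys whatever is on those partitions):

    # mirror two partitions into /dev/md0 and put a filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0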

    if you want to use a rolling-release OS for something work related, i would suggest making sure that no one external (their repo, package manager etc.) could ever prevent you from reinstalling the exact version you had at an exact point in time (snapshots of repos, install media etc). then put everything in something like ansible and verify that reapplying old snapshots is straightforward for you; then (and not earlier) i would say those OSes are ok for something you consider as important as “work”. i tried arch linux at a time when they had already stopped supporting the old installer while the “new” installer wasn’t yet ready for use at all, thus i never really got into long-term use of arch linux for something i rely on, because i couldn’t even install the second machine with the then-broken install procedure *haha

    i believe one should consider NOT tinkering too much on the workstation. having to fix something you personally broke before being able to work on something important is the opposite of awesome. better have a second machine instead, a swappable harddrive, or use VMs.

    The exact OS is IMHO not important. i personally use devuan, as it is not affected by some instability annoyances that are present in ubuntu and probably some more distros that use the same software. at work we monitor some of those bugs of that software within ubuntu, because they create extra hassle; we work around them, so it’s mostly just a buggy, annoying thing visible in monitoring.


  • smb@lemmy.ml to 196@lemmy.blahaj.zone · rule · 5 months ago

    oleep was a pretty effective habit, we should have kept it, as it removed all annoyances of aleep and bleep while having all the blessings that came with cleep, dleep… up to nleep. but some noobs wanting to play adult first created pleep, then qleep, which was addictive and destructive, preventing people from going back, not even to pleep. then intentional enshittification was added, and only sleep somehow remained a still-acceptable solution, but one taking up like a third of the whole day while giving just enough relaxation to really “survive” the first two hours of a working day, from which the overall 90% of success at work comes.

    i miss the times aeons ago when oleep was still common, people were happier, friendlier, more productive and overall healthier.

    cheers o_O 8-)



  • smb@lemmy.ml to Linux@lemmy.ml · When do I actually need a firewall? · edited · 5 months ago

    so here are some reasons for having a firewall on a computer that i did not read in the thread (i could have missed them). i had already written this but then lost the text again before it was saved :( so here’s a compact version:

    • having a second layer of defence, to prevent some of the direct impact of e.g. supply chain attacks like “upgrading” to a maliciously manipulated version.
    • controlling things tightly and reporting strange behaviour as an early warning sign ‘if’ something happens, no matter whether attacks or bugs.
    • learning how to tighten security and knowing better what to do in case you need it some day.
    • sleeping more comfortably, knowing what you have done or prevented.
    • compliance with some laws, or with customers’ buzzword-matching wishes.
    • the fun of doing it because you can.
    • getting in touch with real-life side quests that you would never be aware of if you did not actively practice by hardening your system.

    one side-quest example i stumbled upon: imagine an attacker has compromised the vendor of a software you use on your machine. this software connects to some port eventually, but pings the target first before doing so (whatever! you say). from time to time the ping does not go to the correct 11.22.33.44 of the service (a weather app maybe) but to 0.11.22.33. looks like a bug, you say, never mind.

    but it could be something different. pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.

    also, the attacker knows the IP that gets pinged (but it’s an outbound connection to an unreachable IP, you say, so what could go wrong?)

    let’s assume the attacker knows the external IP of your router by other means (e.g. you’ve sent an email to the attacker and your freemail provider hands your external router address over to him inside a received header of the email, or the manipulated software updates a dyndns address, or the attacker just guesses that your router has an address in your provider’s dial-up range - no matter how).

    so the attacker knows when and from where (or from what range) you will ping an unreachable IP address, and in exactly what timeframe (the software, running from cron or in user space, pings the “buggy” IP address at exact times). within that timeframe the attacker sends an icmp unreachable packet to your router’s external address and puts the known buggy IP into the payload as the address that is unreachable. the router matches the payload of the packet, recognizes it as related to the known connection tracking entry and forwards the icmp unreachable to your workstation, which in turn gives your application the information that the IP address of the attacker informs you that the buggy IP 0.11.22.33 cannot be reached by him.

    as the source IP of that packet is the IP of the attacker, the software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends to it. sure, the attacker needs that backdoor to already exist and run on your workstation, and to know or guess your external IP address, but the actual behaviour of the software looks normal, a bit buggy maybe, but there is exactly no information within the software about where the command and control server would be, only that it would respond to the icmp unreachable packet it eventually receives.

    all connections are outgoing, but the attacker “connects” to his backdoor on your workstation through your NAT “firewall” as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond - either because the IP does not exist, or because it cannot respond due to a DDoS attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window the provider of the manipulated software officially announces. the attacker just needs the IP to not respond, or to respond slooowly, to widen the timeframe for connecting to his backdoor on your workstation before your router deletes the connection tracking entry of that unlucky ping.
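    you can watch the mechanism this example builds on yourself (the address is an example from the TEST-NET range, the conntrack tool is from conntrack-tools):

    # on the workstation: ping an address that will never answer
    ping -c 1 203.0.113.99
    # on the NAT router, within the entry’s timeout: this icmp conntrack entry
    # is what a forged “icmp unreachable” can be matched against as RELATED
    conntrack -L -p icmp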

    if you don’t understand how that example works, that is absolutely normal, and i might be bad at explaining too. thinking out of the box, around corners that are only sometimes corners to think around, and only under very specific circumstances that could happen by chance or could be directly or indirectly under the attacker’s control while revealing the attacker’s location only in the exact moment of connection, is not an easy task, and it can really destroy the feeling of achievable security (aka the belief to have some “control”). but this is not a common attack vector, only maybe an advanced one.

    sometimes side quests can be more “informative” than the main course ;-) so i would put that (“learn more”, not the example above) as the main good reason to install a firewall and other security measures on your pc, even if you’d think you’re okay without them.


  • This is most likely a result of my original post being too vague – which is, of course, entirely my fault.

    never mind, i got distracted and carried away a bit from your question by the course the messages had taken.

    What is your example in response to?

    i thought it could possibly help clarify something, which it sort of did, i guess.

    Are you referring to an application layer firewall like, for example, OpenSnitch?

    no, i do not consider a proxy like squid to be an “application level firewall” (but i don’t know opensnitch, however). i would just limit outbound connections to some fqdns per authenticated client and ensure the connection only goes to where the fqdns actually point. like, an attacker could create a weather applet that “needs” https access to f.oreca.st but implements a backdoor that silently connects to a static ip using https. with such a proxy, f.oreca.st would be available to the applet, but the other ip not, as it is not included in the acl, neither as fqdn nor as ip. if you like to call this an application layer firewall, ok, but i don’t think so; to me it’s just a proxy with acls that only checks for the allowed destination and whether the response has some http headers (like 200 ok), but not really more. yet it can make it harder for some attackers to gain the control they are after ;-)
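    a minimal squid.conf sketch of that idea (the username, paths and password file are assumptions):

    # hypothetical squid.conf fragment: this proxy user may reach f.oreca.st, nothing else
    auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
    acl weatherapp proxy_auth weatherapp
    acl weather_dst dstdomain f.oreca.st
    http_access allow weatherapp weather_dst
    http_access deny all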


  • But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there’s an open port, then there’s an opening for unintended escape.

    now i have the feeling there might be a misunderstanding of what “ports” are and what an “open” port actually is. or i just don’t get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service “from” your machine. i can do so from MY machine to other machines as i like, and if those allow me, you cannot do anything against that, unless that other machine happens to be actually yours (or you own a router that happens to be on my path to where i connect to).

    let’s try something. your machine A has an ssh service running, my machine B has ssh, and another machine C has ssh too.

    users on the machines are a, b, c: the machine letters, but in lowercase. what should be possible and what not? like: “a can connect to B using ssh”, “a can not connect to C using ssh (forbidden by A)”, “a can not connect to C using ssh (forbidden by C)” […]

    so what is your scenario? what do you want to prevent?

    I don’t fully understand what this is trying to accomplish.

    accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, but independent of ip addresses: using domains to allow or deny per user/application + domain combination, while not having to update ip-based rules that could quickly become outdated anyway.



  • you do not need to know the source ports for filtering outgoing connections.

    (i usually use “shorewall” as a nice and handy wrapper around iptables, with a “reject everything else” policy once i have configured everything as i want. so i only occasionally use iptables directly; if my examples don’t work, i might simply be wrong with the exact syntax.)

    something like:

    iptables -I OUTPUT -p tcp --dport 22 -j REJECT

    should prevent all new tcp connections TO ssh ports on other servers when initiated locally (the forward chain is again another story).

    so … one could run an http/s proxy under a specific user account and block all outgoing connections except those of that proxy (e.g. squid); then every program that wants to connect somewhere using direct ip connections would have to use that proxy.

    better try this first on a VM on your workstation, not your server in a datacenter:

    iptables -I OUTPUT -j REJECT
    iptables -I OUTPUT -p tcp -m owner --uid-owner squiduser -j ACCEPT

    “-I” inserts at the beginning, so the second -I actually becomes the first rule in that chain, allowing tcp for the linux user named “squiduser”, while the very next rule would be the reject-everything one.

    here i also assume “squiduser” exists, and hope i recall the syntax of the owner match (--uid-owner) correctly.

    then create user accounts within squid for all applications (that support using proxies), with precise acls for where (which fqdns) these squid users are allowed to connect to.

    there are possibilities to intercept regular tcp/http connections and “force” them to go through the http proxy, but when it comes to https and not-already-known domains the programs would connect to, things become way more complicated (search for “ssl interception”): the client program/system needs to trust “your own” CA first.

    so the concept is: disallow everything by iptables, then allow more fine-grained access via the http proxy, where the proxy users have to authenticate first. this way your weather desktop applet may connect to w.foreca.st if configured, but not to e.vili.sh, as that would not be included in its user’s acl.

    this setup would not prevent everything applications could do to connect to the outside world: a locally configured email server could probably be abused, and even DNS would still be available to evil applications to “transmit” data to their home servers, but that’s a different story - abuse of your resolver or forwarder then, not of the tcp stack. there exists a library to tunnel tcp streams through dns requests and their answers; a bit creepy, but possible and readily available. and an http-only proxy alone does not prevent tcp streams like ssh: a simple tcp-through-http-proxy tunnel software i think was called “corkscrew” or similar and would go straight through an http proxy, but it would need the other end of the tunnel software to be up and running.

    much could be abused by malicious software if it gets executed on your computer, but in general, preventing simple outgoing connections is possible and more or less easy, depending on what you want to achieve.


  • you can copy your system live, but that would involve other tools than dd too.

    with dd, when copying the whole device (instead of just partitions), everything gets cloned. this includes uuids, labels, lvm devices with their lv and vg names, and raid devices in case you have any. all of these (c|w)ould collide unless the original disk is taken out, or the labels, uuids etc. of either the new or the old disk are changed before boot to prevent collisions or accidentally mounting/booting the original partitions. also, if (!) you use device names, e.g. in fstab, crypttab, scripts or such, things could break, same as with the uuids. and you might have to take action for your bios to actually boot from the stick; most people disable usb boot on notebooks for security reasons.

    using dd, cloning the full disk to the full stick, then removing the original disk and setting the bios boot setting might work out of the box. i’d try that first, as it only takes the effort of booting from another os to do the dd copy offline (preventing filesystem damage while copying).
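    a minimal offline clone sketch (device names are examples - double-check them, dd happily overwrites the wrong disk):

    # clone the whole internal disk to the usb stick, partition table and bootloader included
    dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync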

    a live copy could be done by cloning only the partition layout and bootloader, then setting up new filesystems (with new uuids) and a new lvm group/volumes etc. if any, copying the original disk using rsync (maybe “bind” mounting to separate partitions if needed), and then adjusting the boot config to match the new uuids/labels. this could be done while the system to be copied is running, but of course even running rsync twice might miss some updates to files currently kept open by desktop programs, or to logfiles.
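    the copy step of such a live approach could look roughly like this (the mountpoint is an assumption):

    # copy the running root filesystem into the new one, preserving hardlinks,
    # acls and xattrs, without crossing filesystem boundaries
    rsync -aHAXx --numeric-ids / /mnt/newroot/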

    Without knowing the exact setup, only limited answers can be given, but you have to make sure the boot process will work. so at least the boot loader (grub?) and its files will be needed, which - at least in the past and for old lilo/grub - could not reside beyond some position on the disk, past some “high value” of some number of GBs. if that limitation is still there, the exact partition layout on your usb stick might be relevant for success, but trial and error should give you the hints you need.

    you might use “language models” for getting hints, but they are language models, not friends. their “solution” might break your system and delete your data, and they are trained to say they are sorry afterwards, but they aren’t sorry; it’s just a sequence of probabilities and words to them, nothing more.

    So always only work on data that has been backed up and proven suitable for you to recover everything you need from scratch, no matter whether friends, language models or lemmy users assist you ;-)

    UPDATE: just learned that batocera is “designed” to be just copied to a usb stick and run from there, so it will most likely already include everything you need. best is to follow their instructions on how to create the usb stick to boot from. if you already have it running from a partition, you can most likely copy your current data using rsync. but beware: if you have two copies with the same uuids (partition + usb), that might not work as expected.


  • As i see it, the term “firewall” was originally the neat name for an overall security concept for your system’s privacy/integrity/security. thus physical security is (or can be) as much part of a firewall concept as, say, training of users. the keys to your server room’s door could be part of that concept too.

    In general you only “need” to secure something that actually is there. you won’t build a safe into the wall and hide it behind an old painting without something to put into it, or without - which could also be part of the concept - an alarm sensor that triggers when that old painting is moved, thus creating sort of a honeypot.

    whether and what types of security you want is up to you (so don’t blame others if you made bad decisions).

    but as a general rule out of practice, i would say it is wise to always have two layers of defence, to always prepare for one “error” at a time, and to try to solve it quickly then.

    example: if you want an rsync server on an internet-facing machine to be accessible only for some subnets, i would suggest you add iptables rules as tight as possible and also configure the service itself to reject access from all but the wanted addresses. also consider monitoring both, maybe using two different approaches: monitor that the config stays as defined, and also set up an access check from one of the unwanted, excluded addresses that fires an alarm when access becomes possible.
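    a sketch of those two layers for rsyncd (the subnet is an example):

    # layer 1: iptables lets only the wanted subnet reach the rsyncd port (873/tcp)
    iptables -A INPUT -p tcp --dport 873 -s 192.168.1.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 873 -j REJECT

    # layer 2: /etc/rsyncd.conf rejects everyone else as well
    hosts allow = 192.168.1.0/24
    hosts deny = *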

    this would not only prevent that unwanted access from happening, but also prevent accidental opening or breaking of the config from going unnoticed.

    here it’s the same: whether you want monitoring is also up to you and your concept of security, as it is with redundancy.

    In general i would suggest setting up an ip-filtering “firewall” if you have ip forwarding activated for some reason. a rather tight filtering would maybe only allow what you really need while DROPping all other requests, but sometimes icmp comes in handy, so maybe you want ping or MTU discovery to actually work. it always depends on what you have and how strongly you want to protect it, from what, and with what effort. a generic ip filter that only allows outgoing connections on a single workstation may be a good idea as a second layer of “defence”, in case your router has hidden vendor backdoors that either the vendor sold or someone else simply discovered. disallowing all those might-be-usable-for-some-users, default-on protocols like avahi & co in some distros would probably help a bit then.
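    such a second layer on a single workstation could start as small as this (a sketch, not a complete ruleset):

    # allow loopback and replies to connections we opened ourselves, drop other inbound traffic
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT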

    so there is no generic fault-proof rule of thumb…

    to number 5: what sort of “not trusting” the software? it might have, has, or “will” have:

    a. security flaws in code
    b. insecurity by design
    c. backdoors by gov, vendor or distributor
    d. spy functionality
    e. annoying ads as soon as it has an internet connection
    f. all of the above (now guess the likely vendors for this one)

    for c, d and e one might also want to filter some outgoing connections…

    one could also use an ip-filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (fail2ban, for example).

    so maybe create a concept first and then ask how to achieve the desired precautions. or just start with your idea of the firewall and dig into some of the rabbit holes that appear afterwards ;-)

    regards