I’m here for awhile

  • 3 Posts
  • 34 Comments
Joined 3 months ago
Cake day: October 3rd, 2025

  • First, thanks for replying. I appreciate the feedback and the thoughtful replies.

    If your social media instance has 1000 users and one account gets compromised, the other 999 users shouldn’t have anything leaked beyond their interactions with that one user.

    If I intended to use this for mission-critical communications or something, maybe I would add and enforce two-factor authenticated logins. That could mitigate this concern to some extent. Or use Tor’s built-in authenticated onion service mechanism and distribute the certificate to users. This thing was never intended to scale to that size, though.

    But this is pretty much the case for any platform, yeah? If you gain access, you gain access?

    Users who did not allow their posts to be shared with the compromised account would remain private, and conversations outside of the compromised account would remain private. AND, let’s say you had a hint that an account was compromised and you were using web crypto: resetting your password would break the encryption of all conversation history… OR anyone engaged in a sensitive conversation could remotely wipe their conversations.
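
    To illustrate why a password reset would break conversation history, here is a minimal Python sketch of the underlying idea, assuming the encryption key is derived from the password (the app’s actual browser-side Web Crypto scheme may differ; all names here are illustrative):

    ```python
    import hashlib, os

    def derive_key(password: str, salt: bytes) -> bytes:
        # Derive a 256-bit encryption key from the password (PBKDF2-HMAC-SHA256).
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    salt = os.urandom(16)
    old_key = derive_key("old-password", salt)
    new_key = derive_key("new-password", salt)

    # Changing the password changes the key, so ciphertext produced under the
    # old key becomes unreadable -- which is why a reset wipes the history.
    assert old_key != new_key
    assert derive_key("old-password", salt) == old_key  # deterministic per password
    ```

    The same property cuts both ways: a compromised account’s history stays encrypted once the legitimate owner rotates the password.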

    Are file uploads encrypted?

    File uploads are encrypted in transit from the client to the server but not encrypted at rest on the server. Anyone needing anything further would already know how to encrypt a file and can handle that manually. The main reason is that it’s a heavy operation. My use case is sending a PDF of an already public news article or something, so I didn’t feel implementing it was important.

    But if I may flip the question… Why does an inaccessible post even need to return 403 anyway? It just functions as a big footgun that may cause any other exploits to behave worse.

    That’s a fair question. I could see how it could be used to probe the server or something. The thing is, you would only get that distinct 403 response if you were logged in. If you were logged out, you get the same response whether you check a valid UUID or a non-UUID, so I’m not sure what an attacker is learning.

    But you can determine its existence or not through the status code.

    You get the small benefit of knowing whether a file exists only if you have valid credentials. If you don’t have credentials, you’re going to get bounced to the login screen no matter what string you try, with no feedback.
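
    The response behavior described above can be modeled like this (a hypothetical sketch of the decision logic, not the app’s actual route code):

    ```python
    def fetch_image(logged_in: bool, uuid_exists: bool, access_granted: bool):
        # Model of the described behavior: anonymous requests all get the same
        # redirect, so only authenticated users can distinguish 403 from 404.
        if not logged_in:
            # Identical answer for valid UUIDs, invalid UUIDs, and garbage
            # strings: a logged-out prober learns nothing from the response.
            return (302, "/login")
        if not uuid_exists:
            return (404, None)
        if not access_granted:
            return (403, None)
        return (200, "image-bytes")

    # Logged-out probes are indistinguishable from one another:
    assert fetch_image(False, True, True) == fetch_image(False, False, False)
    ```

    The existence oracle is therefore only available to accounts that already hold a valid invite, which narrows the attack surface to compromised members.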

    Gifs will lose any animation, pngs will lose quality. Also, as far as I can tell, there’s nothing stopping a malicious user uploading a non-image file.

    Again, this is a design choice: I don’t want GIFs. There are filetype checks at line 350 of the app. Only PNG, WebP, and JPEG are allowed.
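
    A minimal sketch of such a whitelist check (illustrative names, not the app’s actual line-350 code):

    ```python
    ALLOWED = {"image/png", "image/webp", "image/jpeg"}

    def check_upload(mime: str) -> bool:
        # Reject anything that is not PNG, WebP, or JPEG.
        return mime in ALLOWED

    assert check_upload("image/png")
    assert not check_upload("image/gif")         # GIFs deliberately rejected
    assert not check_upload("application/x-sh")  # non-image uploads bounced
    ```

    Note that a declared MIME type alone is spoofable; a robust check should also sniff the actual bytes, e.g. by opening the upload with Pillow and inspecting the detected format, which the re-encoding step effectively does anyway.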

    One of the main design goals was to keep this lightweight. That’s why I’m only displaying 10 photos before a new page is created. I am extremely happy with the performance of the image compression. The compression amount is tunable, though, if you want higher quality.

    The server can ingest an 8 MB photo and compress it down to 100–500 KB, and it still looks totally fine to me. I was most amazed by this function. Plus, I like that I’m able to archive all these family moments in a really small footprint. Over 250 photos is only about 40 MB.
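
    The core of that kind of re-encoding is a few lines with Pillow. This is a hedged sketch of the approach described (the real compress-and-store function’s name, signature, and quality setting may differ):

    ```python
    import io
    from PIL import Image

    def compress_and_store(raw: bytes, quality: int = 70) -> bytes:
        # Re-encode every upload as a JPEG. Converting to RGB flattens any
        # transparency, and saving without an exif argument drops metadata.
        img = Image.open(io.BytesIO(raw)).convert("RGB")
        out = io.BytesIO()
        img.save(out, format="JPEG", quality=quality, optimize=True)
        return out.getvalue()

    # Demo: round-trip a generated PNG through the compressor.
    src = io.BytesIO()
    Image.new("RGB", (800, 600), (200, 30, 30)).save(src, format="PNG")
    jpeg_bytes = compress_and_store(src.getvalue())
    assert Image.open(io.BytesIO(jpeg_bytes)).format == "JPEG"
    ```

    Lowering `quality` shrinks files further at the cost of artifacts, which is the tunable trade-off mentioned above.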

    There are two steps to making a post: Upload and store the image and add the post to the database. There’s also similar steps to deleting a post: Removing the image upload and removing the post from the database. Are both these operations atomic?

    Yes, deleting is atomic. It leaves no trace in the DB, and it really removes the file from the server’s file directory. All comments and likes associated with the post are wiped as well.
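
    One common way to get that guarantee is to do all the row deletions in a single database transaction and only remove the file after the commit succeeds. A self-contained sketch with a hypothetical schema (sqlite3 here; the app uses SQLAlchemy, and its actual schema may differ):

    ```python
    import os, sqlite3, tempfile

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE posts(id TEXT PRIMARY KEY, image TEXT);
        CREATE TABLE comments(id INTEGER PRIMARY KEY, post_id TEXT);
        CREATE TABLE likes(id INTEGER PRIMARY KEY, post_id TEXT);
        INSERT INTO posts VALUES ('p1', 'img.jpg');
        INSERT INTO comments(post_id) VALUES ('p1');
        INSERT INTO likes(post_id) VALUES ('p1');
    """)

    img = tempfile.NamedTemporaryFile(delete=False)
    img.close()

    def delete_post(post_id: str, image_path: str) -> None:
        with db:  # one transaction: all three deletes commit or roll back together
            db.execute("DELETE FROM comments WHERE post_id=?", (post_id,))
            db.execute("DELETE FROM likes WHERE post_id=?", (post_id,))
            db.execute("DELETE FROM posts WHERE id=?", (post_id,))
        os.remove(image_path)  # remove the file only after the DB commit succeeds

    delete_post("p1", img.name)
    assert db.execute("SELECT COUNT(*) FROM comments").fetchone()[0] == 0
    assert not os.path.exists(img.name)
    ```

    Ordering it this way means a crash mid-operation can leave at worst an orphaned file on disk, never a dangling database row pointing at a missing image.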

    It’s not that hard for a sufficiently motivated adversary to get an account on a sufficiently large instance. You need to ensure that one user account being compromised doesn’t result in information leakage from unrelated accounts.

    My current built-in security features are as follows:

    • invites only generated by the server manager

    • ability for the server manager to delete and wipe accounts.

    • ability to rotate your onion address. This cuts off all access to the service; the server operator would then need to redistribute the new onion address.

    • users have control of any data they have sent to the server… i.e., real deletion rights that really delete things.

    • any new invitee to the server has zero access to any accounts. Each user already in the instance needs to manually allow access to all their posts.


    1. You list “Activist/journalist secure communication” as a use case. Not all countries have freedom of press.

    Is that an inaccurate claim? It should provide the means to organize and communicate securely, to the extent Tor is secure, and if you’re using the official Tor Browser, web crypto can be utilized for groups and 1-1s as an additional layer of encryption. I thought it was a fine claim. It should be able to handle quite a few people messaging all at once on the Pi variant.

    1. Looks like you name images based on a random uuid, so that should protect against filename attacks. But if you do have a filename you can tell whether the image exists or not.

    How would you ever discover a filename?

    If you did have a filename and the exact URL to the image, you would need to be logged in as a valid user, and the person who shared the photo would have needed to allow access to their profile.

    Even if you have the correct link, if those two conditions aren’t satisfied you will not be able to view it.
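
    On the “how would you ever discover a filename” point: a version-4 UUID carries 122 random bits, so filenames cannot realistically be guessed or enumerated. A quick stdlib sketch (the helper name is illustrative):

    ```python
    import uuid

    def random_image_name() -> str:
        # uuid4 gives 122 bits of randomness: guessing a stored filename
        # by brute force is computationally infeasible.
        return f"{uuid.uuid4().hex}.jpg"

    names = {random_image_name() for _ in range(10_000)}
    assert len(names) == 10_000                  # no collisions in practice
    assert all(n.endswith(".jpg") for n in names)
    ```

    Even so, unguessable names are an obscurity layer, which is why the two access checks above still gate every request.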

    Also, looks like all uploads are converted to jpg, regardless as to whether the original image was a jpg (or even an image) or not. Don’t do that.

    This was a design choice for consistency in filetypes. What’s the downside? All browsers support displaying a JPEG.

    1. Can you point to where in code this invariant is enforced?

    Which part are you talking about? The image compression is defined as the compress and store function.

    The “API reference” in the readme goes into further specifics on how this works with flask.

    Everything except the login page and the registration link sits behind these two checks; see (def login), where the @loginrequired logic is applied to each of the app routes.
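
    For readers unfamiliar with the pattern, a login-required decorator in Flask style boils down to something like this (a self-contained sketch with a stand-in session dict, not the app’s actual code):

    ```python
    import functools

    session = {}  # stand-in for Flask's per-request session object

    def login_required(view):
        # Wrap a view so anonymous visitors are bounced to the login page
        # before the view body ever runs.
        @functools.wraps(view)
        def wrapped(*args, **kwargs):
            if not session.get("user_id"):
                return ("redirect", "/login")
            return view(*args, **kwargs)
        return wrapped

    @login_required
    def feed():
        return ("ok", "feed-page")

    assert feed() == ("redirect", "/login")  # logged out -> bounced
    session["user_id"] = "alice"
    assert feed() == ("ok", "feed-page")     # logged in -> served
    ```

    Applying the decorator to every route except login and registration is what produces the uniform “bounced to the login screen” behavior described earlier.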


    1. I disclaim the opposite; I don’t tout its ability against nation-states in the Readme.

    1. There are two checks for someone on the server to be able to view a post. First, are you a valid user? Then, did the person sharing the photo give you access to view their posts? If both are true, you can see the post. Also, on upload to the server, the image gets compressed and stripped of any metadata, including the file name… so no, they couldn’t check a file name. Each photo is given a randomly generated filename.

    Edit.

    1. There can’t be any posts without images attached. There will always be a post and an image (unless it’s a 1-1 DM or group chat, which has its own rules for access).
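
    The two-check access rule from point 2 reduces to a small predicate. A hedged sketch with hypothetical names and data structures (the app presumably stores this in its database rather than in dicts):

    ```python
    def can_view_post(viewer: str, owner: str, valid_users: set, shares: dict) -> bool:
        # Check 1: the viewer must be a valid account on the instance.
        # Check 2: the post's owner must have granted that viewer access.
        return viewer in valid_users and viewer in shares.get(owner, set())

    valid_users = {"alice", "bob"}
    shares = {"alice": {"bob"}}  # alice shares her posts with bob only

    assert can_view_post("bob", "alice", valid_users, shares)          # both checks pass
    assert not can_view_post("mallory", "alice", valid_users, shares)  # not a valid user
    assert not can_view_post("alice", "bob", valid_users, shares)      # bob shared nothing
    ```

    Because sharing defaults to an empty set, a fresh invitee sees nothing until each existing user explicitly grants access, matching the invite behavior listed above.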


  • Thanks, that’s actually the first constructive comment here. I do realize it’s a completely unconventional release format, and if I want others to contribute I’ll have to restructure the repo. I just hope you at least understand why it is the way it is. That was not the LLM’s choice. I specifically asked for the monolithic format because my development environment is a mobile device… it was too complicated to split the program into its expanded file directory and have to update each individual file before testing an iteration or feature I added.

    For example, to add a notification dot I would have needed to touch the Python app routing, the HTML, the database classes, and the CSS. It was too much to keep track of in a mobile environment. The single-file script allowed for a faster feedback loop because I can just swap the script in the terminal and it will overwrite the existing directory in a snap.


  • Why? Because it’s long and complex? It would be the same exact thing just separated. What’s the difference honestly?

    Here is an overview.

    It starts with defining environment variables, app directory, file permissions for the directory.

    Then it assembles/installs or updates the dependencies.

    Then it concatenates the Python app. The Python app is big because it’s complex, with all the game logic of three mini games.

    The Python app grabs all the dependency packages it needs, creates the database, and defines all the functions I wanted, such as: what’s a like, what does a comment button do, what does a login button do, what’s a Scrabble game, what’s a chess game, what’s a read receipt… All these functions define when and where to interact with the database.

    Then the HTML templates are concatenated. This is the shell of what is served to the client so they can interact with the database.

    Next the CSS file is born. This is just a skin to make it all look nice.

    Finally, it finishes with the CLI server manager. It provides the operator admin functions. Turn the server on, off, networking on and off, backups, invites to server, uninstall the whole app and more.


  • The backbone and internals were made by great developers…not me. That’s a good thing. Each time you run the script these packages are updated to the latest and greatest.

    What I’ve done is brought it all together and generated some harmless html, css, python app to bring it all to life.

    Things I didn’t make:

    tor - networking backbone

    clang - compiler infrastructure.

    libjpeg-turbo - server-side image compression to keep it all lightweight

    openssl - open library for encrypted internet communications over tor

    gnupg - encrypted backups

    flask - lightweight web framework

    sqlalchemy - the database backbone

    pillow - image processing

    itsdangerous - handling session data securely

    werkzeug - webserver gateway interface

    gunicorn - WSGI-compliant server for performance and for handling server requests efficiently.

    If any of these packages gets a security update or performance improvement, nanogram benefits immediately, because it installs the most up-to-date version of each utility on every run.


  • I’m not saying it’s the correct or proper way to do things; it was just the easiest way for me to keep track of everything. This entire thing was created on mobile, and I found it was quicker to keep things in one copy-pastable format.

    The workflow was: ponder new features, discuss ways to implement them, implement and generate the monolith with the implementation, copy-paste it into the terminal, test to see if it’s what I wanted, tweak until I’m happy, rinse and repeat. It wasn’t like this was a one-liner prompt into an LLM.

    Here’s a tip:
    Writing good code is about writing it for the next human, not for the machine.

    Not to be rude, but as someone with no coding background, I feel like I can read and understand what’s going on in this raw source pretty well at this point, after watching each portion generate hundreds of times. Why can’t you read and understand it if you’re a 20-year senior dev?