Following the last post, I’ll comment that I use Screen Sharing very regularly. Each time I open it in macOS 10.13, I have to start typing before it will auto-complete my last session. To save a few seconds, I wrote a quick Alfred workflow that scans my recent Screen Sharing profile history (stored in a user Library folder) and pattern-matches as I type.
Now I can just open Alfred, type “screen” + <space> + <keyword>, and it’ll bring up matching profiles so I can connect directly. Since Screen Sharing falls back to VNC, I can also use this to quickly connect to my Raspberry Pi, the wife’s laptop, or anything else I might need.
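Under the hood it’s little more than a filename match. Here’s a rough sketch of that step — the profile folder path is an assumption (the exact location of the saved `.vncloc` profiles varies by macOS version):

```shell
# Rough sketch of the matching step. PROFILE_DIR is an assumption --
# macOS keeps saved Screen Sharing profiles as .vncloc files under a
# user Library folder, but the exact path varies by version.
PROFILE_DIR="${PROFILE_DIR:-$HOME/Library/Application Support/com.apple.ScreenSharing}"

match_profiles() {
  # Case-insensitive match of the keyword against saved profile names
  find "$PROFILE_DIR" -iname "*$1*.vncloc" 2>/dev/null
}
```

Alfred then renders whatever this returns as selectable results.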
I was recently provided a new MacBook for work (previously I’d been using my personal Mac, and we’re tightening data security). I purged the personal laptop of all corporate data, but I still prefer my personal machine and setup when working from home. So I started using macOS’s built-in Screen Sharing.app, connecting to the work laptop over home Wi-Fi and just doing all my work stuff that way. I have my personal apps open (Twitter, AirMail, iMessage, etc.) alongside the work laptop’s screen (where I tend to have Sublime Text, Slack, and Outlook running all day).
For the most part “It Just Works”™️, but occasionally, after switching back and forth between desktops, command-key combos stop working on the remote machine. Things like copy/paste still work, but Command-Tab and Command-Space are caught by the host.
After some Googling, I found most people link this behavior to over-eager listening by Dock.app. So I wrote an Alfred workflow to quickly fix the issue.
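The workflow itself just wraps the remedy those posts suggest: restart the Dock and let macOS relaunch it automatically. Sketched as a function for clarity:

```shell
# Restart the Dock; macOS relaunches it automatically, and the
# command-key combos start reaching the remote session again.
fix_cmd_keys() {
  killall Dock
}
```

Bound to a keyword in Alfred, it makes the fix a two-second operation instead of a trip to Activity Monitor.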
If you’re into web security, you have hopefully heard of SecLists. It’s an amazing repository of keywords, indicators, payloads, passwords and more. It’s great not just for SecOps, but also developers and QA who want to step up their security game.
As part of a project I’m working on, I wanted to be able to quickly compare strings in the Discovery/Web-Content files against logs I have regularly synced to AWS S3 (specifically, ELB logs for my SaaS platform). I’ve already created Athena tables to find interesting data in those logs, so I just needed a new table for this content. So I wrote a quick script that fetches the SecLists repo, copies it up to S3, then generates an Athena table.
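The fetch-and-upload half can be sketched like this (the bucket name is a placeholder, not my real one, and the AWS CLI is assumed to be configured):

```shell
# Sketch of the fetch-and-upload steps; "my-seclists-bucket" is a
# placeholder, and the AWS CLI is assumed to be configured.
sync_seclists() {
  bucket="${1:-my-seclists-bucket}"
  # Shallow clone keeps the download small
  git clone --depth 1 https://github.com/danielmiessler/SecLists.git
  # Mirror the repo contents into the bucket, skipping git internals
  aws s3 sync SecLists/ "s3://$bucket/seclists/" --exclude '.git/*'
}
```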
This gist shows how to make the whole repo searchable, but it’s worth noting that there are README files and other content in there you don’t want to query (including GIFs and other binaries). So it’s a good idea to restrict your queries to subfolders using the $path metavariable, or CREATE your table using that subfolder in the LOCATION path. (For example, since I’m only interested in web content, I gave that full path in my CREATE TABLE statement.)
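For example, a table pinned to the web-content subfolder might be created with DDL along these lines (the bucket and paths are placeholders; each line of a wordlist becomes one row in the single string column):

```shell
# Hypothetical Athena DDL pinned to the web-content subfolder; the
# bucket and result paths are placeholders, not my real ones.
DDL="
CREATE EXTERNAL TABLE IF NOT EXISTS seclists_web_content (
  word string
)
LOCATION 's3://my-seclists-bucket/seclists/Discovery/Web-Content/'
"

# Submitting it needs AWS credentials, so the call is shown commented out:
# aws athena start-query-execution \
#   --query-string "$DDL" \
#   --result-configuration OutputLocation=s3://my-athena-results/
echo "$DDL"
```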
What’s rad about this is that (a) it’s searchable using standard SQL, (b) I can compare strings to other data files using Athena, and (c) I only incur access/query charges when I run my queries, rather than having an always-on database instance.
This guy on the news is claiming that he doesn’t think the current hurricanes (Harvey, Irma, Jose) are connected in any way to global warming. Seems like a scientist would want some more data before making that kind of a claim…
I’m kind of surprised I couldn’t easily find something like this elsewhere. After all the recent news about unsecured (or very poorly secured) AWS S3 buckets, I wanted to find a quick and easy way of checking my own buckets. Between the several AWS accounts I manage, there are hundreds of buckets.
AWS sent out an email to account owners listing unsecured buckets a while back. Read more about it from A Cloud Guru, where they also discuss how to secure your buckets. But that doesn’t necessarily help with quick auditing. AWS provides some tools like Inspector to help find issues, but setting it up can take some time (though it’s totally worthwhile in the long run). I’m impatient, and I want to know stuff right now.
My solution was to write a quick script that scans my buckets for glaring issues. Namely, I want to know if any of my buckets have the READ permission set for “everyone” or “any AWS account”. If READ is allowed for “everyone”, anyone can list or download files in that bucket. If it’s allowed for “any AWS account”, the barrier is trivial: a user just has to have an AWS account to review your bucket contents.
So here’s my script.
It requires the AWS CLI and jq, an awesome utility you can download here. It checks top-level bucket ACLs for public-read settings and alerts you to the offending bucket names. From there, I’ll leave it to you to secure your buckets.
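The core of the check looks roughly like this — a sketch, not the exact script. It flags any bucket whose ACL grants READ to the AllUsers group (“everyone”) or the AuthenticatedUsers group (“any AWS account”):

```shell
# jq filter: emit the permission for any READ grant to a public group
PUBLIC_GRANTS='
  .Grants[]
  | select((.Grantee.URI? // "") | (contains("AllUsers") or contains("AuthenticatedUsers")))
  | select(.Permission == "READ")
  | .Permission
'

# Sketch of the audit loop; assumes the AWS CLI is configured
audit_buckets() {
  aws s3api list-buckets --query 'Buckets[].Name' --output text \
  | tr '\t' '\n' \
  | while read -r bucket; do
      if aws s3api get-bucket-acl --bucket "$bucket" \
         | jq -e "$PUBLIC_GRANTS" >/dev/null; then
        echo "PUBLIC READ: $bucket"
      fi
    done
}
```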
If you just want to take the nuclear option and update your buckets to private-only, you can do that with this AWS CLI command:
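(A reconstruction, since the original snippet isn’t embedded here; the standard call is `aws s3api put-bucket-acl --acl private`, looped over every bucket. Note this resets bucket ACLs only — individual object ACLs are left as-is.)

```shell
# Reconstruction of the nuclear option: reset every bucket's ACL to
# private. Bucket ACLs only; object ACLs are not touched.
make_buckets_private() {
  aws s3api list-buckets --query 'Buckets[].Name' --output text \
  | tr '\t' '\n' \
  | while read -r bucket; do
      aws s3api put-bucket-acl --bucket "$bucket" --acl private
    done
}
```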
This one was damn confusing, but the solution is absurdly obvious. I needed to relocate a large number of files from a single source into a collection of subfolders. (These subfolders were essentially worker queues, so I wanted a roughly even distribution every time new files appeared in the source folder.)
But I noticed that every time this executed, all my source files were ending up in the same queue folder (and not being evenly distributed). What gives?
Turns out, my call to $RANDOM was being expanded only once, by the parent shell at runtime, so its value was set statically and reused for every subsequent mv command. The eureka moment was when I realized that, for a subshell command, I needed to escape my dollar signs so they’d be ignored by the parent shell and expanded by the child.
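A minimal reproduction (folder and file names are illustrative, not my actual queues):

```shell
# Minimal reproduction in a scratch directory
cd "$(mktemp -d)"
mkdir -p source queue0 queue1 queue2 queue3
touch source/a source/b source/c source/d

# Broken: the parent shell expands $RANDOM once, so every file lands
# in the same queue:
#   ls source | xargs -I{} bash -c "mv source/{} queue$(($RANDOM % 4))/"

# Fixed: escape the dollar signs so each child shell expands its own:
ls source | xargs -I{} bash -c "mv source/{} queue\$((RANDOM % 4))/"
```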
I suddenly found all my files going to the correct folders. Yet another reminder to always keep scope in mind.