We're ecstatic to announce that the tradition continues! 200ok and Ardeo are teaming up once again to host an official Swiss satellite venue for EmacsConf 2023, making it a hat-trick following our successful collaborations in 2019 and 2022. Mark your calendars for December 2 and 3, 2023, and join us at Ardeo's Coworking Hub in Lucerne, a venue that radiates community spirit, openness, and innovation.
Hosting EmacsConf is more than just providing a space; it's about fostering a community centered around the joy of GNU Emacs and Lisp. We are committed to the Free Software movement and believe that gatherings like EmacsConf are crucial for spreading the word and encouraging more people to engage in this important cause.
The spots at Ardeo's Coworking Hub for EmacsConf 2023 are limited to 25. Hence, sign up early, and release your seat just as early if you cannot attend.
To sign up, send an email to info@200ok.ch:
Subject: Register for EmacsConf satellite in Lucerne
---
Days: Dec 2 and Dec 3
Number of attendees: 2
Detailed information on all talks: https://emacsconf.org/2023/talks/
We are looking forward to meeting many old and new faces. The conference is a great opportunity to network, learn, and contribute to the community.
We plan to take photos during the event for online publication. If you prefer not to be photographed, please consider this before attending. Guests are also welcome to take and share their own photos.
Admission is free ("Free as in Beer"). However, on-site donations are welcome and appreciated.
We encourage everyone to get involved in making EmacsConf 2023 a success. Whether you're interested in speaking, or just attending, there's a place for you in this community.
EmacsConf 2023 will also be an online conference, ensuring that you can be part of the experience even if you can't make it to Lucerne. The EmacsConf upstream team is fully committed to freedom and will continue to use an infrastructure and streaming setup that consists entirely of free software, just like in previous EmacsConf events.
For general discussions about EmacsConf, you can join the emacsconf-discuss mailing list. If you're interested in the organizational aspects, the emacsconf-org mailing list is the place to be. Public and private emails can be sent to emacsconf-org@gnu.org and emacsconf-org-private@gnu.org, respectively.
To engage in real-time discussions, the #emacsconf channel on irc.libera.chat is the go-to place. You can join the chat using your favorite IRC client or by visiting chat.emacsconf.org in your web browser.
Let's make EmacsConf 2023 a memorable event that celebrates Free Software and the vibrant community that supports it. We look forward to seeing you there!
So what did Microsoft do to further the adoption of their PWA? Instead of redirecting to a URL that makes the browser start the Linux client, they redirect directly to their PWA.
Luckily, given the meeting URL, constructing the URL for the Linux client is rather simple. You just need to replace https://teams.microsoft.com with msteams:. The resulting URL can be opened directly with Teams (or indirectly via xdg-open) by passing it as an argument. (AFAIK there is no option in the Teams GUI to open a meeting link.)
Here is a bash alias that does the job for you:
alias pott='f() { xdg-open "${1/https:\/\/teams.microsoft.com/msteams:}"; }; f'
The alias is called by passing the meeting URL (in quotes, because of fancy characters):
% pott "https://teams.microsoft.com/l/..."
This will open the Linux client as you were used to when clicking a link to a Teams meeting – at least for now.
Just in case you're wondering, we called it "pott" which stands for "Plain Old Terrible Teams". We fully expect you to rename the alias to your preference. Here are some ideas to get you going (it really depends on where you're standing):
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
alephDAM is part of Forward Publishing. As a technology partner and one-stop shop, we helped Schwäbische.de's transformation into a future-proof digital newsroom. Forward Publishing's portfolio is chosen based on the best-of-breed principle, allowing Schwäbische.de to have better content with less IT.
Livingdocs is also part of Forward Publishing. With Livingdocs, Schwäbische.de's journalists and designers are able to produce digital content efficiently and aesthetically.
If you want to learn more about how to automate the delivery of news agency content, check out alephDAM. If you're interested, schedule a demo any time!
event-scheduler is a small library that we have created to provide the schedule for the various recurring Zen meditation retreats of Lambda Zen Temple.
We encourage others to release their code under FLOSS terms in their own projects and to check out our event-scheduler library on GitHub. The library is available under the AGPL license, has no dependencies and tests are written in Jest.
If you liked this post, please consider supporting our Free and Open Source software work - you can sponsor us on Github or star our FLOSS repositories.
However, when you have a lot going on, it may not always be possible to completely close up all the loose ends from the previous year. That's where the power of custom bookmarks in mu4e, the Emacs mail reader, comes in.
With mu4e, you can define custom bookmarks that allow you to quickly access specific groups of emails. This can be a great way to temporarily "hack" your way to an inbox zero state, even when you have a lot of older emails that you still need to process.
For example, the following elisp code creates a bookmark that displays all unread and flagged messages from the current year, without touching the older emails:
;; The backquote/comma makes sure the query string is built at load time.
(add-to-list 'mu4e-bookmarks
             `(,(concat "(flag:unread OR flag:flagged) AND NOT flag:trashed AND date:"
                        (format-time-string "%Y"))
               "Unread messages, current year" ?U))
With this bookmark, you can quickly view and address the most important emails from the current year, without getting overwhelmed by the older emails that you can tackle at a later date.
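Since mu4e bookmarks are plain mu queries, you can also sanity-check the query from a shell before wiring it into your config. A quick sketch, assuming mu is installed and your mail is indexed (--fields just picks date, sender, and subject):

mu find "(flag:unread OR flag:flagged) AND NOT flag:trashed AND date:$(date +%Y)" --fields "d f s"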
In summary, while it may not always be possible to achieve a complete inbox zero state, custom bookmarks in mu4e can help you stay productive and focused by allowing you to temporarily declutter your inbox and focus on the most important tasks at hand.
If you liked this post, please consider supporting our Free and Open Source software work - you can sponsor us on Github or star our FLOSS repositories.
The talks and discussions for EmacsConf are available online at https://emacsconf.org/2022/talks/. We encourage others to support the Free Software community by donating to organizations such as 200ok (https://github.com/sponsors/200ok-ch), Org mode (https://liberapay.com/org-mode), and the Free Software Foundation and GNU (https://my.fsf.org/donate).
We would like to extend a heartfelt thank you to all of the organizers of these events, especially Amin Bandali and Sacha Chua, for their hard work and dedication in making them such a success. We would also like to thank all of the attendees for their participation and for contributing to the wonderful atmosphere of the conferences.
Here are some impressions from EmacsConf and reClojure. It was a truly special weekend, and we are grateful to have been a part of it.
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
Fortunately, I recently discovered a simple yet powerful solution to this problem: an elisp function that automatically recalculates clock tables whenever an Org mode file is saved.
To use this function, simply add the following code to your init.el file:
(defun autocalc-clocktable ()
  "Recalculate the first clock table when AUTOCALC_CLOCKTABLES is set."
  (when (derived-mode-p 'org-mode)
    (save-excursion
      (goto-char (point-min))
      (if (string-equal (car (cdr (car (org-collect-keywords '("AUTOCALC_CLOCKTABLES")))))
                        "t")
          (progn
            (goto-char (search-forward "clocktable"))
            (org-clock-report))))))

;; Run the function on every save, so clock tables stay up to date.
(add-hook 'before-save-hook #'autocalc-clocktable)
Once this function is defined, you can enable it for a specific Org mode file by adding the following line to the file:
#+AUTOCALC_CLOCKTABLES: t
From then on, every time you save that file, the clock tables within it will be automatically recalculated, allowing you to see an up-to-date view of your clocked time without having to manually run the `org-clock-report` command.
I have found this function to be incredibly useful, as it saves me a lot of time and hassle when working with clock tables in Org mode. Give it a try and see if it helps improve your Org mode workflow as well!
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
Let's create a productivity-of-the-day function returning the total number of TODO statements that have either been added or removed from all agenda files. This is a pretty good proxy for productivity - or at least for seeing that there's a bit of progress throughout the day.
The code does the following:
Count the added or removed TODO statements for today using good old command-line tooling like:
git log --since=yesterday -p ~/Dropbox/org/things.org \
  | grep TODO \
  | grep -E "^\+|^\-" \
  | wc -l
Here's the code:
(defun count-lines-with-expression (s exp)
  "Count the number of lines in the string S that contain the regular expression EXP."
  (let ((count 0))
    (mapc (lambda (line)
            (when (string-match-p exp line)
              (setq count (+ 1 count))))
          (split-string s "\n"))
    count))

(defun productivity-of-the-day ()
  (seq-reduce
   (lambda (acc it)
     (let* ((folder (file-name-directory it))
            (file (file-name-nondirectory it))
            (base-cmd (concat "cd " folder "; git log --since=midnight -p " file "| grep TODO"))
            (changed (shell-command-to-string base-cmd))
            (added (count-lines-with-expression changed "^\\+"))
            (removed (count-lines-with-expression changed "^\\-")))
       (cons (+ (car acc) added)
             (- (cdr acc) removed))))
   org-agenda-files
   '(0 . 0)))
You can then show this number in a convenient place - for example, in the Emacs modeline. Personally, I show it in the status bar (polybar) of my window manager (i3). Here's what it looks like:
This is done by calling emacsclient in a custom module:
[module/productivity]
type = custom/script
exec = echo 💪 `emacsclient -a "" --eval "(productivity-of-the-day)"`
If you're new to calling Emacs functions from the command-line or other scripts, we've got you covered. Here's a blog post outlining this in detail.
Happy hacking!
P.S.: Don't conflate quantity with quality.
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
We are also happy to announce that this year we will have a physical venue in Switzerland where we can gather as a community, watch talks, hack all things Clojure, and have a good time together.
Our hosts will be Ardeo (ardeo.ch) - the venue is their Coworking Hub in Lucerne. The space is just a couple minutes from Lucerne central station and features three rooms - one per track and a hacking/chillout space. Attending the conference is free (as in beer). Optional donations are welcome. For Ardeo, the Coworking Hub is central to the sharing economy and practising openness and transparency. As a driver for innovation, Ardeo is a long-standing partner of 200ok. Phil and Alain from 200ok are co-hosting.
The first day, Dec 2, will be hosted in its entirety from 10:30 until 21:30. The second day conflicts with EmacsConf 2022, which is also being hosted in the space on Dec 3, so the official duration of the reClojure event will be 10:30 - 14:30. After that, the focus will be on EmacsConf. Optionally, should there be demand and space, we might run the rest of the reClojure talks in a separate room.
You can find details about the reClojure schedule on their website https://www.reclojure.org/
Ardeo's Coworking Hub spots for reClojure 2022 are limited to 25. Hence, sign up early, and release your seat just as early if you cannot attend. For signing up, please use this Meetup event:
Please make sure to sign up directly at the Meetup event of the organizers, as well, so that they have a proper grasp of how many people attend the online conference: https://www.meetup.com/london-clojurians/events/289598000/
We are very much looking forward to reClojure 2022 - to meeting great people, having interesting discussions and sharing all things Clojure and Lisp \(^_^)/
If you'd like to attend reClojure and EmacsConf, please sign up for both events separately. Here's more information about EmacsConf.
We are also happy to announce that this year, as in 2019, we're going to have a physical venue in Switzerland where we can gather as a community, watch talks, hack all things Emacs, and have a good time together.
Our hosts will be Ardeo (ardeo.ch) - the venue is their Coworking Hub in Lucerne. The space is just a couple minutes from Lucerne central station and features three rooms - one per track and a hacking/chillout space. Attending the conference is free (as in beer). Optional donations are welcome. For Ardeo, the Coworking Hub is central to the sharing economy and practising openness and transparency. As a driver for innovation, Ardeo is a long-standing partner of 200ok, the host of the official Swiss satellite of EmacsConf 2019. Phil and Alain from 200ok are co-hosting.
EmacsConf 2022 will be on Dec 3 (Sat) and Dec 4 (Sun), both from 3 pm-11 pm Zurich/CET.
EmacsConf 2022 will have two tracks. The General track will include talks about Emacs workflows and community resources, while the Development track will focus on technical topics. Even if you're new to Emacs and Emacs Lisp, you'll probably find lots of talks that can inspire and help you learn.
You can find more information on the schedule here: https://emacsconf.org/2022/talks/
The spots at Ardeo's Coworking Hub for EmacsConf 2022 are limited to 25. Hence, sign up early, and release your seat just as early if you cannot attend.
To sign up, send an email to emacsconf-register@gnu.org like so:
Subject: Register for EmacsConf satellite in Lucerne
---
Days: Dec 3 and Dec 4
Number of attendees: 2
We are very much looking forward to EmacsConf 2022 - to meeting great people, having interesting discussions and sharing all things Emacs and Lisp \(^_^)/
(Photo impressions from the 2019 satellite: goodies, venue, introduction to organice, coffee break.)
NB: I have used this setup for the better part of 5 years now. If the past is any indication of the future, it'll continue to work without too many manual adaptations on your part.
Here's a demo of what the final setup looks like:
I use the i3 tiling window manager. It has excellent support for window and tab management. Additionally, I'm using rofi, a window switcher and application launcher. It is similar in functionality to dmenu. Using i3 and rofi, it is trivial to start applications, order them into workspaces in different layouts and later find them, again.
In the demo screenshot above, there are multiple Firefox windows open; some are tabbed (by the window manager). On top of that, I'm assuming that I have many workspaces open and forgot where the 'hackernews' window was. I ask rofi, and it fuzzy auto-completes the query. If I hit 'enter', it takes me to the workspace and brings the window to the foreground.
To set it up, follow these steps:
If you've customized your browser (basically any desktop browser) before 2019, you might know that a browser loads a user-specific style sheet called userChrome.css or userContent.css when starting up. In 2019, with v69, Firefox disabled this behavior by default. However, being the configurable browser that Firefox is, it can be enabled again:
1. Type about:config in the address bar and press Enter/Return. Click the button accepting the risk.
2. If the toolkit.legacyUserProfileCustomizations.stylesheets preference is not already set to true, double-click it to switch the value from false to true.
3. In your Firefox profile directory, create a folder called chrome (see the shell sketch below to locate it).
4. Inside it, create a file userChrome.css with the following content:

#tabbrowser-tabs {
  visibility: collapse !important;
}

.private-browsing-indicator {
  background-image: none !important;
}

#tabbrowser-tabs, #navigator-toolbox, menuitem, menu {
  font-size: 15px !important;
}

#TabsToolbar {
  visibility: collapse !important;
}
Install and enable a Firefox extension which forces new tabs to be opened in new windows, instead. I use NoTabs.
You're all set. Quit and restart Firefox to pick up the userChrome change and you're good to go.
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
TL;DR - here's the final script (derivation below):

#!/bin/sh
if emacsclient -a false -e 't'; then
  emacsclient \
    -e "(progn (find-file \"$1\") (org-html-export-to-html))"
else
  emacs --batch \
    -l ~/.emacs.d/init.el \
    -f org-html-export-to-html \
    --kill "$1"
fi
At 200ok we use Org mode a lot. And by "a lot", I mean basically for everything from writing (documentation, concepts, quotes, and blog posts like this one), to organizing work (budgeting, capacity planning, time tracking, ticketing, meeting minutes), even our ledgers are generated in org. Some of our processes are highly automated, and we built quite some tooling to facilitate automation. Most of it is not fit for open-sourcing, but you might have come across organice (a FLOSS implementation of Org mode without the dependency of Emacs, built for mobile and desktop browsers) or ukko (a versatile static site generator), which uses the pattern described in this article to render HTML pages from org sources.
Naturally, when sharing documents like meeting minutes, quotes, and invoices with our business partners, using org internally boils down to exporting to HTML or PDF. From within Emacs' org-mode, this is done with C-c C-e h h or C-c C-e l p, respectively. When we want to automate this, we have to be able to trigger the export from the command line. Using HTML as an example, this can be done with the following command:
emacs --batch \
  --load ~/.emacs.d/init.el \
  --funcall org-html-export-to-html \
  --kill "${ORG_FILE}"
Let's break the command down. --batch will run Emacs in batch mode; this means we have to provide the file to load with --load, as well as the function to call with --funcall. We will load the user's standard Emacs configuration to ensure we have the same setup as if we exported interactively. In theory, the file loaded could be limited to the required packages and configuration to save loading time, but this optimization pales in comparison to the time saver I will soon lay out. --kill will ensure that Emacs exits after the call to our function has returned.
This command takes some time, as it not only has to start Emacs but also load the user's config, which might load a lot of packages. The actual export is rather quick, even for larger org files. The noticeable wait stems from starting Emacs and loading packages. Therefore, Emacs users who close and open Emacs regularly usually have an Emacs daemon in place - an instance of Emacs that runs as a daemon, also known as an Emacs server. Starting an Emacs daemon is as easy as running emacs --daemon. With Emacs running as a daemon, emacsclient is used to connect. Emacsclient doesn't provide the same "batch" functionality, but we can still use it to export org files through the Emacs daemon. We'll simply need to pass it some elisp code to do the same.
emacsclient \
  --eval "(progn (find-file \"${ORG_FILE}\") (org-html-export-to-html))"
With --eval we can pass elisp code to the running instance of the Emacs daemon. progn is an elisp special form that allows us to put multiple function calls in a sequence. find-file will find and, more importantly, load the given file into a buffer. Finally, org-html-export-to-html will export the file to HTML. This is much faster, as it happens in the daemon, an already running instance of Emacs.
Now you'll want to ensure that your automation always works (because what good are automations if they fail). If you run the command on your machine, it might be safe to assume you have an Emacs daemon running. But there are plenty of environments which don't have an Emacs daemon running - think of continuous integration systems or other developers' machines. This means we should check if an Emacs daemon is available and resort to just using Emacs otherwise. Fortunately, checking is easy:
emacsclient --alternate-editor false --eval 't'
This will run emacsclient with the option --alternate-editor, which allows us to define an executable that should be run unless an Emacs daemon is found to connect to. We specify false as the alternate editor because, in order to check if an Emacs daemon is running, we don't want another editor to open; instead, we want the command to fail, i.e. return a non-zero exit code, which false does. --eval 't' will just evaluate to true, which essentially is a no-op and will make emacsclient return with a zero exit code, meaning success. This can conveniently be used as a condition to if. Hence, the final script will look like this:
#!/bin/sh
if emacsclient --alternate-editor false --eval 't'; then
  emacsclient \
    --eval "(progn (find-file \"$1\") (org-html-export-to-html))"
else
  emacs --batch \
    --load ~/.emacs.d/init.el \
    --funcall org-html-export-to-html \
    --kill "$1"
fi
So far so good. This solution follows the principle of ceteris paribus. This means the file will be exported and everything else is unchanged, including the fact that Emacs daemon was either running or not. If you are willing to sacrifice the principle of ceteris paribus the solution can even be reduced to:
#!/bin/sh
emacsclient --alternate-editor "" \
  --eval "(progn (find-file \"$1\") (org-html-export-to-html))"
From the manpage of emacsclient:
-a, --alternate-editor=EDITOR
if the Emacs server is not running, run the specified editor instead. This can also be specified via the 'ALTERNATE_EDITOR' environment variable. If the value of EDITOR is the empty string, then Emacs is started in daemon mode and emacsclient will try to connect to it.
This will start the Emacs daemon if it is not running, which not only makes the check for a running Emacs daemon unnecessary, but also automatically speeds up consecutive exports.
Additionally, this is so short that, instead of a shell script, it could be an alias wrapped in an immediately invoked function:
alias org2html='f() { emacsclient -a "" -e "(progn (find-file \"$1\") (org-html-export-to-html))"; }; f'
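Usage mirrors the longer script - pass the org file you want exported:

org2html path/to/your.org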
As a bonus for the reader who made it to this point, here is the snippet of how we use the pattern above in our Makefiles. "Wait, what?" you say. Why Makefiles? Much like org, we are using GNU make a lot. And by "a lot", I mean basically for everything - you see where this is going. I cannot stress enough how useful GNU make is! It's exactly like the late Joe Armstrong said:
Make is the only build tool you'll ever need to learn. […] I cannot understand why people use specialized build tools for their different languages. […] Once you've learned how make works, you use it for everything, and then you don't need to learn a new build tool when you change your programming language. It's very easy to learn as well.
(Word of warning when copying this code: GNU make is very picky about tabs and spaces, and those are easy to mess up when copying and pasting code from a website.)
F?=default.org

.PHONY: export
export: ## Exports a given org file F to HTML
	if emacsclient -a false -e 't'; then \
	  emacsclient -e '(progn (find-file "$(F)") (org-html-export-to-html))'; \
	else \
	  emacs --batch -l ~/.emacs.d/init.el $(F) -f org-html-export-to-html --kill; \
	fi

.PHONY: watch
watch: ## watches subdirectories and exports and syncs on change
	filewatcher -l *.org 'make export F=$$FILENAME; make sync'
Having this in your Makefile will export default.org to an HTML file with make export. This can be applied to any other org file by providing its path as F to make:
make export F=path/to/your.org
Additionally, the make target watch uses the Filewatcher CLI to observe changes to org files in any subdirectories and run make export and make sync for a fully automated deployment process on save.
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
The following Elisp code exports your Org mode agenda files to an iCalendar file. iCalendar (or ICS) is a standard for exchanging calendaring information. If you host this file on a web server, it can be consumed by any calendar application that supports iCalendar. Here are some guides for different calendar app providers:
(setq org-directory "~/Dropbox/org/")

(defun set-org-agenda-files ()
  "Set different org-files to be used in `org-agenda`."
  (setq org-agenda-files
        (list (concat org-directory "things.org")
              (concat org-directory "reference.org")
              (concat org-directory "media.org")
              (concat org-directory "shared_with/bob.org")
              "~/src/your_company/admin/things.org"
              "~/src/your_customer/admin/pm.org")))

;; Setting variables for the ics file path
(setq org-agenda-private-local-path "/tmp/dummy.ics")
(setq org-agenda-private-remote-path "/sshx:user@host:path/dummy.ics")

;; Define a custom command to save the org agenda to a file
(setq org-agenda-custom-commands
      `(("X" agenda "" nil ,(list org-agenda-private-local-path))))

(defun org-agenda-export-to-ics ()
  (set-org-agenda-files)
  ;; Run all custom agenda commands that have a file argument.
  (org-batch-store-agenda-views)
  ;; Org mode correctly exports TODO keywords as VTODO events in ICS.
  ;; However, some proprietary calendars do not really work with
  ;; standards (looking at you Google), so VTODO is ignored and only
  ;; VEVENT is read.
  (with-current-buffer (find-file-noselect org-agenda-private-local-path)
    (goto-char (point-min))
    (while (re-search-forward "VTODO" nil t)
      (replace-match "VEVENT"))
    (save-buffer))
  ;; Copy the ICS file to a remote server (Tramp paths work).
  (copy-file org-agenda-private-local-path org-agenda-private-remote-path t))
You could run the function org-agenda-export-to-ics as a hook whenever you change an agenda file. Since I'm editing my Org files not just with Emacs, but also with organice, I'm doing this on a regular basis in a cron job which runs this code:
#!/bin/bash
emacs -batch -l ~/.emacs.d/init.el -eval "(org-agenda-export-to-ics)" -kill
if [[ "$?" != 0 ]]; then
  notify-send -u critical "exporting org agenda failed"
fi
The hourly cron job looks like this:
0 * * * * /home/munen/bin/export-org-agenda.sh
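To install the job, make the script executable and add the schedule line via crontab (paths as in the examples above):

chmod +x /home/munen/bin/export-org-agenda.sh
crontab -e   # then add the hourly line shown above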
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
How does xdg-open know which application is the right one, and how do you configure the one you want to use instead? 🤓
To get started, try opening your file like so: xdg-open your_path. This is just to confirm what application is opened right now. To configure a different application, first we have to know about the mimetype of your_path. In this example, I'm asking for the mimetype of an image:
$ mimetype foo.jpg
foo.jpg: image/jpeg
So, the mimetype is image/jpeg. Now we can ask how xdg comes up with the responsible application:
$ XDG_UTILS_DEBUG_LEVEL=3 xdg-mime query default image/jpeg
Checking /home/munen/.config/mimeapps.list
Checking /home/munen/.local/share/applications/mimeapps.list
imv.desktop
We are increasing the debug level with XDG_UTILS_DEBUG_LEVEL=3 to list not only the responsible application, but also the paths to the config files.
Finally, we can set a new default application. Suppose you want to open JPEGs with imv in the future. You will add the responsible desktop entry file. My distribution (Debian) usually ships with a desktop entry file for every package. The Debian package for imv does include an imv.desktop file. Here's how to configure imv as the default for image/jpeg files, then:
$ xdg-mime default imv.desktop image/jpeg
This will add the following line to one of your mimeapps.list config files (i.e. .config/mimeapps.list):
image/jpeg=imv.desktop
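To verify that the new default took effect, you can run the query from before again:

$ xdg-mime query default image/jpeg
imv.desktop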
If an application doesn't have a desktop entry file, or you want to create a custom configuration for your needs, just create a new file in ~/.local/share/applications.
Here is an example of creating a desktop entry file for Emacs - but instead of running Emacs (which would be the default), it uses emacsclient. I'm using this config for all kinds of mimetypes (directories, zip files, etc.).
[Desktop Entry]
Version=1.0
Name=GNU Emacs (GUI)
GenericName=Text Editor
Comment=GNU Emacs is an extensible, customizable text editor - and more
MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
Exec=/usr/bin/emacsclient -c %F
Icon=emacs25
Type=Application
Terminal=false
Categories=Utility;Development;TextEditor;
StartupWMClass=Emacs
Keywords=Text;Editor;
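Depending on your distribution, you may need to refresh the desktop database so that the new entry is picked up (the tool ships with desktop-file-utils):

update-desktop-database ~/.local/share/applications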
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
You might think that moving to gitlab.com is just a matter of git pull && git push and that the migration will be done quickly. Depending on your usage of Gitlab, this is either a naïve oversimplification or downright dangerous. The truth is more nuanced and entails quite a bit more work. Since the migration took us a couple of full working days and we wrote some reusable checklists and code in the process, we have quickly jotted these down. Maybe somebody else can also make use of it.
When moving off a self-managed instance, there are a couple of high-level planning questions to consider:
[ ] Are there multiple entities using Gitlab?
[ ] How to coordinate the switch between different teams and people?
[ ] Inform your teams with enough time to plan ahead.
After these, there's high level technical tasks:
[ ] Choose a new forge.
[ ] On gitlab.com, create new groups and user accounts.
[ ] Invite your teams to the new groups.
Now, to the actual migration tasks. Gitlab has the ability to export and import projects. This is possible to do in the web application. However, depending on the number of projects, this will be quite tedious and error-prone. We opted to make use of various Gitlab CLI projects, but that didn't pan out. Having said that, using the Gitlab API directly is well documented and straightforward.
Now, on a high level, we'll do the following:

1. Get a list of all projects with their id and name.
2. Schedule an export for each project.
3. Poll the export status.
4. Download each finished export.
5. Import the downloaded archives into gitlab.com.

Here are the details:
curl "https://gitlab.200ok.ch/api/v4/projects?private_token=$GITLAB_TOKEN&per_page=100" \
  | jq '[.[] | { id: .id, from: .path_with_namespace, to: "" }]'
This will yield a structure like:
[
{ "id": 1, "from": "200ok/project-name", "to": "" }
]
For automating tasks 2-5, we have written this Ruby script:
require 'json'

projects = [
  # This project will keep its namespace and project name when
  # imported.
  { "id": 1, "from": '200ok/project-name', "to": '' },
  # This project will only be downloaded for archiving, but not
  # imported to gitlab.com
  { "id": 2, "from": '200ok/project-name', "to": nil },
  # This project will be imported to a different namespace and project
  # name.
  { "id": 3, "from": '200ok/project-name', "to": 'ns2/project-name-2' }
]

# prepare projects
projects.each_with_index do |project, index|
  to = project[:to]
  to = project[:from] if to && to.empty?
  project[:to] = to
  projects[index] = project
end

BASE_CMD = 'curl -s --header "PRIVATE-TOKEN: %s" '
EXPORT_CMD = BASE_CMD + '--request POST "https://gitlab.200ok.ch/api/v4/projects/%s/export"'
STATUS_CMD = BASE_CMD + '"https://gitlab.200ok.ch/api/v4/projects/%s/export"'
DOWNLOAD_CMD = BASE_CMD + '--remote-header-name --remote-name "https://gitlab.200ok.ch/api/v4/projects/%s/export/download"'
IMPORT_CMD = BASE_CMD + '--request POST --form "namespace=%s" --form "path=%s" --form "file=@%s" "https://gitlab.com/api/v4/projects/import"'

# Hyphenated symbol keys have to be quoted in Ruby.
tokens = {
  'old-gitlab-admin-token': 'token1',
  'new-gitlab-user-1': 'token2',
  'new-gitlab-user-2': 'token3'
}

# schedule exports from gitlab.200ok.ch
projects.each do |project|
  puts "Requesting export for #{project[:from]}..."
  cmd = EXPORT_CMD % [tokens[:'old-gitlab-admin-token'], project[:id]]
  system(cmd)
  sleep 0.25
end

# loop to find finished exports and import to gitlab.com
remaining = [true]
while remaining.count > 0
  projects.each_with_index do |project, index|
    file = project[:from].tr('/', '_') + '.tar.gz'
    projects[index][:done] = done = File.exist?(file)
    next if done
    print "Checking #{project[:from]}..."
    cmd = STATUS_CMD % [tokens[:'old-gitlab-admin-token'], project[:id]]
    result = JSON.parse(%x[#{cmd}])
    puts status = result['export_status']
    if status == 'finished'
      puts "Downloading #{project[:from]}..."
      cmd = DOWNLOAD_CMD % [tokens[:'old-gitlab-admin-token'], project[:id]]
      system(cmd)
      system("mv *_export.tar.gz #{file}")
      if to = project[:to]
        token = to.start_with?('username1') ? tokens[:'new-gitlab-user-2'] : tokens[:'new-gitlab-user-1']
        puts "Uploading #{to}..."
        ns, path = to.split('/')
        cmd = IMPORT_CMD % [token, ns, path, file]
        system(cmd)
      end
    end
    # do not inundate gitlab
    # (5 requests per minute per user)
    sleep 15
  end
  remaining = projects.select { |project| !project[:done] }
  puts "Remaining #{remaining.count}/#{projects.count}"
end
puts "All done."
Now your projects are on gitlab.com - you're done, right? Not quite:
All merge requests, comments, assignments, etc. will belong to the user whom the API key belongs to - Gitlab has no way of knowing which users to map these things to.
[ ] Webhooks are not included in the export/import process. If you're using Webhooks, you will have to reconfigure those for each project. You can probably use this API for it, but we did it by hand, because we took the chance to rewire some notifications.
[ ] CI/CD Variables are not included in the export/import process. If you're using CI/CD Variables, you will have to reconfigure those for each project. We did this by hand, but we wrote a script to make it more visible where CI/CD Variables are used:
require 'json'
projects = [
{ "id": 1, "from": '200ok/project-name', "to": '' }
]
projects.each do |project|
  variables =
    `curl --silent --header "Private-Token: #{ENV['GITLAB_API_TOKEN']}" "https://gitlab.200ok.ch/api/v4/projects/#{project[:id]}/variables"`
  variables = JSON.parse(variables)
  unless variables.empty?
    puts "* #{project[:from]}"
    puts "#+begin_src json"
    puts JSON.pretty_generate(variables)
    puts "#+end_src"
    puts
  end
end
This will yield an Org mode document like:
* TODO 200ok/200ok.ch
#+begin_src json
[
{
"variable_type": "env_var",
"key": "FTP_HOST",
"value": "your-ftp-host",
"protected": false,
"masked": false,
"environment_scope": "*"
}
]
#+end_src
Then, make the adjustments in the relevant Gitlab projects.
[ ] Migrate Gitlab container registry.
[ ] If you've used Bot users doing things on commit/push, you'll need to migrate those, their Docker images and config.
Now, you're done with the migration of projects from your self-managed Gitlab instance to gitlab.com. However, the work is not done yet: every local working copy still points at the old host, so you'll need to update the git remote URLs in each checkout's .git/config.
Since it's time critical and error prone to make these adjustments by hand on many projects in a diverse team, we've written a script to automate that process, as well:
STDOUT.sync = true
require 'colorize'
require 'yaml'
require 'securerandom'
system 'stty cbreak'
base = ARGV.first
custom_file = File.expand_path('.fix_git_config.yml', ENV['HOME'])
custom = File.exist?(custom_file) ? YAML.load(File.read(custom_file)) : []
mapping = YAML.load(DATA.read).concat(custom)
repos = Dir.glob('**/.git/config', base: base)
repos.each_with_index do |config, index|
config = File.expand_path(config, base)
repo = config.sub('/.git/config', '')
puts ('-' * 60).colorize(:yellow)
puts "Repo #{index+1}/#{repos.count}: #{repo}".colorize(:yellow)
ini = File.read(config)
replacements = {}
ini_viz = mapping.reduce(ini) do |r, h|
uuid = SecureRandom.uuid
replacements[uuid] = h.keys.first.colorize(:red)
r.gsub(h.keys.first, uuid + h.values.first.colorize(:green))
end
ini_viz = replacements.reduce(ini_viz) { |r, kv| r.gsub(*kv) }
if ini != ini_viz
puts
puts ini_viz
puts
puts 'Type y to apply, n to skip, anything else to abort.'.colorize(:yellow)
q = $stdin.sysread 1
if q == 'y'
ini = mapping.reduce(ini) { |r, h| r.gsub(h.keys.first, h.values.first) }
File.open(config, 'w') { |f| f.write(ini) }
puts
puts "Updated #{config}"
elsif q == 'n'
puts
puts 'Skipped.'
else
puts
puts 'Abort.'
system 'stty cooked'
exit 0
end
else
puts 'No changes required.'
end
end
system 'stty cooked'
__END__
- gitlab.200ok.ch: gitlab.com
- 200ok/old-project-name: ns2/project-name-2
At the end of the script, there's a mapping between old namespaced project names and new ones. So, if you did this kind of cleanup with the first script, you can do it here, too.
The usage of this script is: ./fix_git_origin.rb path
Now, you're all set!
It took us close to four full working days to migrate 75 projects across different teams. With this writeup, we hope you will get it done faster!
If you liked this post and want to say 'thanks', please head over to our free/libre and open source software page - and if you like one of them, give it a star on Github or Gitlab.
If you are working with complex nested JSON structures, you are probably familiar with jq, which is like sed for JSON data and great at what it does. However, being a command-line tool like sed, the feedback loop for writing queries and seeing their results is a discrete process, not a live one.
When working with Emacs, we are used to good auto-completion and live feedback. Formerly, this was mostly done with static input, but with modern completion frameworks like Ivy and Counsel, this can be done with dynamic inputs, as well.
counsel-jq is a package with which you can quickly test queries and traverse a complex JSON structure whilst having live feedback. Just call M-x counsel-jq in a buffer containing JSON, then start writing your jq query string and see the output appear live in the message area. Whenever you're happy, hit RET and the results will be displayed to you in the buffer jq-json.
In this <10m lightning talk, I'll give a quick overview of how to use counsel-jq and how to build similar completion functionality:
The talk was generally well received. Some relevant (and really nice!) remarks were:
"The amount of time this will save is 'scary'" (zeroed, IRC Freenode, #emacsconf)
"This is going to be a life saver <3" (bhavin, IRC Freenode, #emacsconf)
Code repository for counsel-jq: https://github.com/200ok-ch/counsel-jq
Slides for the talk: https://github.com/200ok-ch/talks#200ok-talks.
Finally, there were lots and lots of great talks at EmacsConf 2020. The organizers are working hard in post to get these talks processed and online. Already now, you can see the schedule and some resources online. The videos will follow soon. Here's more information on all things EmacsConf 2020: https://emacsconf.org/2020/
If you are interested in counsel-jq, you might be an Emacs user. If you are an Emacs user, you might also be into Org mode. If you're into Org mode, you might be interested to use it 'on the go' on your phone, share Org files with a non-Emacs user or have access to your files from any web browser in the world. If so, we have you covered: We are building a free and open source implementation of Org mode without the dependency of Emacs - built for mobile and desktop browsers: https://github.com/200ok-ch/organice
This is not to say that as a professional web developer or business owner, you should not employ cookie popups. Whatever is the law, is the law. This video is just for end-users who want to read websites and don't want to get tracked in the process (which, as I understand it, is the tl;dr of the relevant EU legislation).
If you liked this post and want to say 'thanks', please head over to our free/libre and open source software page - and if you like one of them, give it a star on Github or Gitlab.
The file in the video includes just one "long" line of code, which is 5461 characters long. If it were substantially longer (for example, in a minified JSON file), Emacs could lock up completely. The semi-arbitrary line length of this demo file has been handcrafted to show the issue without causing a complete lock-up.
If you're curious why you'd ever need to profile something in Emacs - maybe because you're not a developer - let me say it like this: profiling Emacs can help you even if you're not a (Lisp) programmer. Sometimes, something might just be slow - for example when you try to edit files that include long lines in Emacs. Then, profiling comes in quite handy. For this particular issue, we also have a guide on handling long lines in Emacs.
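By the way, if you'd like to reproduce the demo yourself, a file with one long line is easy to generate from the shell (the filename and byte count here are arbitrary; crank up the count to make the effect stronger):

head -c 5461 /dev/zero | tr '\0' 'x' > long-line-demo.txt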
Update (2020-10-10): In post-production, one piece of information got lost due to an unfortunate cut. After M-x profiler-start RET, Emacs asks which mode to start the profiler in. The options are cpu, cpu+mem and mem. In the demo, I chose cpu, because I already knew that the issue was CPU bound (previously, I saw 100% load for Emacs in top whilst the memory footprint was stable).
If you're into Emacs, chances are that you're also into Org mode. We're building a free and open source implementation of Org mode without the dependency of Emacs - built for mobile and desktop browsers: https://github.com/200ok-ch/organice/
If you liked this post and want to say 'thanks', please head over to the repository and give it a star. If you're not into Org mode, we've got quite a few other free/libre and open source software products that you might enjoy, instead.
This issue is not new and is due to the fact that Emacs scans every line of text multiple times for layout concerns. For example, it will check what the highest glyph in the line is, and it will check for each paragraph whether it should be rendered left-to-right or right-to-left. This enables some cool functionality, like mixing Arabic languages with non-Arabic languages. However, it comes at a cost.
Having said that, do not worry - Emacs has you covered. Here are your options on how to handle files with long lines anyway:
Bidirectional Editing
Emacs supports bidirectional editing, which means it can handle scripts, such as Arabic, Farsi, and Hebrew, whose natural ordering of horizontal text for display is from right to left. Digits and Latin text embedded in these scripts are still displayed left to right.
Whilst this is a great feature, it adds to the number of line scans that Emacs has to do for rendering text. Too many line scans will cause Emacs to hang. If you normally do not work with right-to-left languages, then you can default to displaying all paragraphs in your preferred direction. For example, to enable left-to-right as a default, this is the configuration:
(setq-default bidi-paragraph-direction 'left-to-right)
There's another feature related to bidirectional editing: The Bidirectional Parentheses Algorithm. Disabling the BPA makes redisplay faster, but might produce incorrect display reordering of bidirectional text with embedded parentheses and other bracket characters whose 'paired-bracket' Unicode property is non-nil. Again, if you're usually not working with files that include left-to-right and right-to-left languages at the same time, disabling this gains some performance:
(if (version<= "27.1" emacs-version) (setq bidi-inhibit-bpa t))
This already brings a bit of a performance gain due to fewer line scans, but it solves only part of the problem, because Emacs will still perform multiple line scans for other reasons.
Opening files with very long lines
When you open a file with very long lines, you have multiple options on how to handle the situation:
1. Instead of find-file (usually bound to C-x C-f), you can use find-file-literally. This will visit the opened file with no conversion of any kind; it has been available since at least Emacs 20.1.
2. Disable font-lock-mode. If the load is still too high, there is a list of other minor modes that are known to have this effect in the code of so-long.el.
3. Enable so-long mode. It will revert into a very rudimentary major mode and disable all potentially slow minor modes. so-long has been introduced in Emacs 27; if you're on an earlier version, you can install it through GNU Elpa. When done, you can revert to your old mode setup with so-long-revert.

Last but not least: You can configure Emacs to be smart about opening files with long lines. If Emacs thinks that a file you're opening could trigger performance problems, it will automatically toggle automated performance mitigations.
When the lines in a file are so long that performance could suffer to an unacceptable degree, we say "so long" to the slow modes and options enabled in that buffer, and invoke something much more basic in their place.
(if (version<= "27.1" emacs-version) (global-so-long-mode 1))
That's it - now you're set up to view and edit any kind of file, in whatever shape or form the world might throw at you (;
These flags and more configuration can be seen in my Emacs configuration repository on Github, which is documented in a literate programming style.
Enjoy and happy text processing!
Update: If you're interested in how to actually measure the time Emacs takes to do various tasks (like rendering text), we have added an "Introduction to profiling in Emacs" blog post and video.

If you're into Emacs, chances are that you're also into Org mode. We're building a free and open source implementation of Org mode without the dependency of Emacs - built for mobile and desktop browsers: https://github.com/200ok-ch/organice/
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
In today's release, we added support for automatic phone number recognition. Wherever organice finds a supported phone number format, it will render it as a clickable link. The user can just click the link and start a phone call.
Here's a demo showing the ability to start a call directly from organice:
organice already recognizes various types of hyperlinks automatically - even some that Emacs Org mode does not. That makes sense, because mobile devices (and browsers) enable a different feature set. Here's a screenshot of the kinds of implicit links organice already supports:
If you haven't checked out organice, yet, then now is a good time to do so: The development happens on Github at https://github.com/200ok-ch/organice and we are hosting a free instance at https://organice.200ok.ch/. If you like the product, please give it a star on Github.
There's also a community chat: #organice on IRC Freenode, or #organice:matrix.org on Matrix. Feel free to come and talk to us anytime. We are looking forward to meeting you.
This blog post has an accompanying screencast:
I have a ThinkPad X1 Extreme Gen 2 notebook for work. The reason I bought this machine is that it has lots of power, which I can make good use of when developing software. Being a higher-end notebook, it came with an Nvidia GeForce GTX 1650 GPU, which is a pretty strong beast.
A strong GPU is not only helpful for playing games or crunching ML models; it can also be used instead of the CPU when recording a screencast or editing and encoding video. In my personal experience, my notebook likes to mimic the sound characteristics of a jet engine when doing any of these tasks. The reason is, of course, that modern CPUs have many cores, they get hot quickly, and the super slim body of higher-end notebooks isn't that suitable for transporting hot air out. Using the GPU helps with that - and it makes encoding and transcoding tasks faster.
The relevant feature is called Nvidia NVENC. This feature performs video encoding, offloading this compute-intensive task from the CPU to the GPU. The encoder is supported in many streaming and recording programs, such as Open Broadcaster Software (OBS), Kdenlive and ffmpeg.
NVENC is a proprietary encoder, so it's not built into any of the base installations in Debian. However, it's not that much work to get it going.
This assumes that you already have the proprietary Nvidia graphics driver running for your graphical session (X11 or Wayland). If you don't have that, yet, there are many tutorials online.
sudo apt update
sudo apt install nvidia-cuda-toolkit libnvidia-encode1
cd ~/src
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
make
sudo make install
Add a deb-src entry to /etc/apt/sources.list for your primary packages. Then install the ffmpeg source dependencies:
sudo apt update
mkdir -p ~/src/ffmpeg
cd ~/src/ffmpeg
apt source ffmpeg
sudo apt build-dep ffmpeg
Edit debian/rules to include these CONFIG flags:
--enable-cuda-nvcc \
--enable-cuvid \
--enable-nvenc \
--enable-nonfree \
--enable-libnpp \
--extra-cflags=-I/usr/local/cuda/include \
--extra-ldflags=-L/usr/local/cuda/lib64
Build the ffmpeg and dependency .deb packages:
dpkg-buildpackage -rfakeroot -b -uc -us
Install ffmpeg and its dependencies:
sudo dpkg -i ../*deb
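At this point you can check that the rebuilt ffmpeg exposes the NVENC encoders, and run a quick test encode. A sketch - input.mp4 and output.mp4 are placeholders, and preset/bitrate are just sane starting values:

ffmpeg -encoders 2>/dev/null | grep nvenc
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b:v 6M -c:a copy output.mp4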
For reference, here's a matching render profile string using the NVENC encoder (in the style of Kdenlive/MLT render profiles):

f=mp4 vcodec=nvenc_h264 acodec=aac ab=384k g=15 profile:v=high global_quality=21 -coder 1 vq=21 -r 29.97 preset=slow bf=2 movflags=faststart pix_fmt=yuv420p
If you liked this post and want to say 'thanks', please head over to our free/libre and open source software page - and if you like one of them, give it a star on Github or Gitlab.
Emacs has built-in functionality for checking and correcting spelling called ispell.el. On top of that, there's a built-in minor mode for on-the-fly spell checking called flyspell-mode.
Flyspell can use multiple back-ends (for example ispell, aspell or hunspell).
Hunspell is a free spell checker used by LibreOffice, Firefox and Chromium. It allows you to set multiple dictionaries - even different dictionaries per language (aspell, for example, also allows multiple dictionaries, but only for the same language).
To use hunspell, install it first:
apt install hunspell \
  hunspell-de-de \
  hunspell-en-gb \
  hunspell-en-us \
  hunspell-de-ch-frami
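Before touching your Emacs config, you can verify which dictionaries hunspell actually sees (the /dev/null redirect just keeps hunspell from waiting for interactive input):

hunspell -D < /dev/null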
Then, configure it in your Emacs config with:
(with-eval-after-load "ispell"
  ;; Configure `LANG`, otherwise ispell.el cannot find a 'default
  ;; dictionary' even though multiple dictionaries will be configured
  ;; in next line.
  (setenv "LANG" "en_US.UTF-8")
  (setq ispell-program-name "hunspell")
  ;; Configure German, Swiss German, and two variants of English.
  (setq ispell-dictionary "de_DE,de_CH,en_GB,en_US")
  ;; ispell-set-spellchecker-params has to be called
  ;; before ispell-hunspell-add-multi-dic will work
  (ispell-set-spellchecker-params)
  (ispell-hunspell-add-multi-dic "de_DE,de_CH,en_GB,en_US")
  ;; For saving words to the personal dictionary, don't infer it from
  ;; the locale, otherwise it would save to ~/.hunspell_de_DE.
  (setq ispell-personal-dictionary "~/.hunspell_personal"))

;; The personal dictionary file has to exist, otherwise hunspell will
;; silently not use it.
(unless (file-exists-p ispell-personal-dictionary)
  (write-region "" nil ispell-personal-dictionary nil 0))
That's it. Now you've got the full power of ispell and Flyspell set up - even with multiple dictionaries! And you can benefit from that in any major mode. So, whether you're writing Emails, doing project management, writing code, you can be safe that you're protected from typos.
Here's the gist to get going:
With M-x flyspell-mode, you'll enable Flyspell mode, which highlights all misspelled words. With M-$, you'll check and correct the spelling of the word at point. With M-x ispell-buffer, you'll check and correct spelling in the whole buffer. See the docs for all available functions and keyboard shortcuts.
If you'd like to see this configuration in action, here is it on Github. All of the config is written and documented in literate programming style.
If you're into Emacs, chances are that you're also into Org mode. We're building a free and open source implementation of Org mode without the dependency of Emacs - built for mobile and desktop browsers: https://github.com/200ok-ch/organice/
Enjoy and happy text processing!
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
Update: The snippet above now uses write-region instead of calling out to touch - thanks to u/clemera.
When combined with other powerful features of Emacs (such as Org mode for organizing mails into projects and todos), processing mails within Emacs not only makes a lot of sense, but becomes a powerhouse.
Now, some people refrain from using Emacs (or similarly good mail user agents), because they are afraid that such a setup will not deal well with HTML emails. Not to worry, though. Emacs and mu4e are well up to the task! Here's a short screencast with demos:
If you liked this post and want to say 'thanks', please head over to our free/libre and open source software page - and if you like one of them, give it a star on Github or Gitlab.
In Org mode, C-c / constructs a sparse tree for selected information in an outline tree, so that the entire document is folded as much as possible, but the selected information is made visible along with the headline structure above it. You can filter your Org file down by regexp, TODO keyword, tags, properties, deadlines, and more.
For example, if you had this Org file containing lots of different information and you wanted to drill down on your headers on 'cute dogs' which have the appropriate tags dog and cute, you can filter down on :dog:cute, which will make your document look like this:
When modifying the document now, it is important to remember that it is a sparse tree! Hence, if you delete the line at point, you might have deleted other lines which were folded down. This might be what you want, but it might also not be. If you filtered down a list of headers on the same nesting level, it's better to make semantic changes like toggling the todo state or adding a tag. Only later, when you're viewing the whole file again (for example by using S-TAB), should you start deleting lines.
Now, if you're keen on having this kind of power on the go (aka on your smartphone or browser), check out the free and open source project organice (https://github.com/200ok-ch/organice/). In organice, you can manage the same Org files as in Emacs, but even when away from your computer. Looking up the same information in organice looks like this:
In fact, organice has a search feature which is similar to Emacs sparse trees, but it is even more powerful, because you can compose different kinds of searches into one:
You can simply search for TODO check out organice|orgmode to filter for tasks containing these words. The pipe symbol (|) is a logical OR. The filter is a smart-case search.
The following example searches for headlines containing the START or FINISHED keywords and the string "states are". You can also use single quotes.

START|FINISHED "states are"
The next example excludes DONE headlines but requires the tag fun.

-DONE :fun
You can exclude text strings, tags, and properties as well by prepending the minus sign (-).
Last but not least, you can search for headlines with defined properties:
TODO :blocked_by: :assignee:nobody|none
This filters headlines having a property blocked_by (with any value) and a property assignee with a value containing nobody or none.
Enjoy the productivity boost of drilling down into your Org files from your computer and smartphone! Happy hacking^^
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
There's one big exception though: if you make remote changes whilst your machine is in suspend or hibernate, Dropbox will not sync after waking up. According to some quick searches on the Internet, this issue has been around for quite a while (aka years).
Dropbox will only synchronize after the first locally changed file. If you're often working on the same files (for example for project management or time tracking), this can easily - and often - lead to conflicting file versions and hence potentially lost data and work.
However, not all is lost. On Linux, there's usually a way(; This is the workaround I'm using: Trigger a file change after waking up from suspend/resume. Here's an example on how to do that using systemd facilities:
Create a new file /etc/systemd/system/trigger-dropbox.service with this content:
[Unit]
Description = Trigger a Dropbox sync after suspend/hibernate
After = suspend.target hibernate.target

[Service]
User = your_username
Type = oneshot
ExecStart = /usr/bin/touch /home/your_username/Dropbox/wakeup_call

[Install]
WantedBy = suspend.target hibernate.target
Now, enable this service: sudo systemctl enable trigger-dropbox.
.
You're all set. Dropbox will synchronize after suspending or resuming!
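If you want to verify the unit without actually suspending, here's a quick, hedged sanity check (systemctl start simply runs the unit's ExecStart once):

sudo systemctl daemon-reload
sudo systemctl start trigger-dropbox.service
ls -l /home/your_username/Dropbox/wakeup_call   # the timestamp should be fresh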
If you liked this post, check out our Free and Open Source software project and give your favorite one a star on Github or Gitlab(;
]]>We from 200ok will be there as well - as are our friends from the Insopor Zen Academy. Join us for three great days of hacking, fun and learning!
Last time we built and released our open source crowdfunding and equity funding platform Swiss Crowdfunder, which was immediately used successfully to raise over a quarter million Swiss francs for the Data Center Light. The event itself was really successful and we got a great article about 200ok and ungleich in the newspaper Südostschweiz.
This year, we will be hacking on one of our FLOSS projects organice. It is an implementation of Org mode without the dependency of Emacs. It is built for mobile and desktop browsers and syncs with Dropbox, Google Drive and WebDAV.
Join the event now on the dedicated page or on Meetup.
Where does Hack4Glarus take place?
It will happen at Spinnerei Linthal, a very cool old factory hall at Linthal.
What will be provided at Hack4Glarus?
What do I need to bring to Hack4Glarus?
And there’s an extra: Fridolinpass.
If you have an extraordinarily good idea, apply for the Fridolinpass! Write us why you should participate in Hack4Glarus and how the community will benefit from your hacking. For those who win the Fridolinpass, we will cover their travel costs within Switzerland to Hack4Glarus.
How can I apply?
Apply here and submit your ideas!
Attention: only a limited number of seats are available. Apply now!
]]>We want to especially thank the organizers Amin Bandali and Sacha Chua for their great effort and determination to bring a free Emacs conference to the world. Thank you very much! 🙏 🙇
Also, we want to say a big thank you to all guests and helping hands! Thank you for your insights and support. It was great fun and we're looking forward to seeing you at the next EmacsConf^^
Here are some impressions (best pictures are omitted due to personal 'no picture' policies):
Phil and Alain setting everything up
Tech check
Goodies
Venue
Introduction to organice
Coffee break
Registration
Announcement of FSF
EmacsConf is the conference about the joy of Emacs, Emacs Lisp, and memorizing key sequences.
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
]]>This is not a post complaining about how bad Slack/Skype/YourFavoriteMessanger™ is compared to IRC and arguing that we should keep using IRC instead. I'm a pragmatic person and that battle is lost. Having said that, those other tools have some interesting capabilities that IRC lacks - like notifications and calling people. Yes, I know about XMPP and Matrix, but like I said, this is not a post for ideologists saying there are better ways - it is for pragmatists who have to use these proprietary tools for one reason or another.
This post is about the discrepancy that most work on a computer is based on either text processing or text consumption. Messengers tend to fall into this category - not entirely, because they include other features like file sharing and calls - but people also type a lot of text into them. My claim is: not having a general text editor at your disposal when you have to input and manage loads of text is like being a carpenter with only a hammer in the toolbox. It'll get the job done, but it's going to be a painful, redundant and shitty job.
Just look at this LifeProTip on Reddit. Literally zillions of people figuring out the simplest of shortcuts. They learn that C-DEL deletes a word and praise the heavens, wondering how they could have missed out on something so basic. The article has ~50k upvotes and ~1k comments. There are fun comments like "I don't understand how I can have spent the last two years working as an editor without telling me this. I'm off to cry in the toilets."1 And well, they are speaking about deleting words instead of characters - there's so much more to discover for them. I feel their pain.
These days, the basic notion of how you're supposed to enter text is to type it into a text-area. It's the same for Slack, Skype, web mailers and so on and so forth. Personally, I strongly dislike typing text into text-areas. Maybe because I'm old and grumpy. But also because I type a lot - at the end of the day, typing text is my job. So it's only reasonable to want to be efficient at it.
So let's do something about that! As I said, many messengers do a fine job at other stuff, but they don't shine at manipulating text. So let's solve only this particular issue and happily keep using the messengers for the rest. I've been dreaming about this for a long time, and it turns out there's an easy solution. Let me draw a diagram:
source 2
There's a fantastic program called Bitlbee, which is "an IRC to other chats gateway". Then there is another fantastic library called libpurple, which "is intended to be the core of an IM program". It has support for loads of messengers built in - and there are plugins for others (like Slack and Skype). Bridging Bitlbee and libpurple is already built into Bitlbee.
This means you can actually just use your favorite IRC client to access all those messengers. My favorite IRC client is built into Emacs and is called ERC. Having an IRC client run within Emacs is ideal for our purpose, because Emacs is pretty ideal for manipulating text. Of course, there are other good options and you can choose whatever your favorite is - as long as your IRC client has a better input method than a text-area😏
This is what a typical session might look like:
On the right you see Emacs with two open buffers - both are connected to Bitlbee/IRC and are a /query to phil (who is the co-founder of 200ok). One is connected to Slack, the other to Skype. On the left, you can see that the messages that I typed into Emacs on the right actually appear in the native clients, too. Of course, it also works the other way around, for group chats, channels and so forth. This screenshot is just a glimpse.
So let's get on with setting this up for you. From the description above, this might look like a lot of work, but it's actually not - we're standing on the shoulders of giants here. I got this to work within 20 minutes without having a clue how it worked beforehand. Writing this blog post and documenting what I did and why takes way longer (; Using this tutorial, you can set everything up within just a few minutes.
Let's start with the Bitlbee and libpurple setup. You can install them yourself, or use a pre-configured Docker container. This tutorial uses the latter option.
Let's create a folder with sub-folders for the Bitlbee/libpurple configuration:
mkdir ~/src/bitlbee
cd ~/src/bitlbee
mkdir config etc
chown 777 config etc
Then let's create a docker-compose.yml file within that folder which will start Bitlbee/libpurple and all the messenger plugins:
version: "2.4"
services:
  bitlbee:
    image: ezkrg/bitlbee-libpurple:201907221448
    ports:
      - 6667:6667
    volumes:
      - type: bind
        source: ./config
        target: /var/lib/bitlbee
      - type: bind
        source: ./etc
        target: /etc/bitlbee
Now, you can start the container. Then, you can log in using your favorite IRC client.
docker-compose up
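Before pointing your IRC client at it, a quick, hedged sanity check that the gateway is actually up might look like this (assuming iproute2's ss is installed):

docker-compose up -d    # or keep it running in the foreground as above
ss -tln | grep 6667     # Bitlbee should be listening on port 6667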
Now, all that's left is to set up accounts for the messengers that you want to proxy into IRC. Here's the quickstart documentation from Bitlbee. I'll show you how to set up one popular messenger: Slack.
- Your IRC client will drop you into the &bitlbee channel. This is where you do all the Bitlbee configuration/administration.
- Register yourself: /OPER register [a_safe_password]
- Add your Slack account and turn it on:
account add slack [username]@[your_team].slack.com [your_token]
account on
- Join a channel: chat add slack general
- Persist your settings: save
NB: If you're also using ERC, you can save that a_safe_password password by adding this line to your ~/.authinfo.gpg:
machine localhost login "[$USER]" password [a_safe_password]
Now, you're all set. You can use regular IRC commands like /query to DM a person or /join #[channelname] to join a channel.
Your settings are persisted through the mounted volume. On the next connect to Bitlbee via IRC, you'll be logged in and Bitlbee will automatically connect to Slack.
Now, what can you do about the apps of those messengers? Likely that depends on which ones you use and for what you're using them. Personally, I'm still running them in the background on my PC as well as on my phone and watch for the things that they do well: Notifications while I'm not on my PC, calls and file sharing. But for everything related to text, I'm using the new setup.
NB: Running the apps side-by-side next to this setup works well - and having the Docker container and Emacs open takes way less memory and CPU than just running Slack😏
If you're looking for an IRC channel to join - check out #organice on Freenode (bridged to #organice on Matrix). It's our new community chat for a free and open source implementation of Org mode without the dependency of Emacs - built for mobile and desktop browsers: https://github.com/200ok-ch/organice/
Enjoy and happy text processing!
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
For the curious: This graph has been created as an Org mode source block using DOT syntax:
digraph {
  edge [fontname="Bitstream Vera Sans"]
  node [fontname="Bitstream Vera Sans"
        shape="box"
        style="filled"
        fillcolor="dodgerblue"
        color="white"
        fontcolor="white"
        width="2.8"
        fixedsize="true"]
  slack [label="Slack"]
  skype [label="Skype"]
  anymessenger [label="YourFavoriteMessanger™"]
  bitlbee [label="Bitlbee / IRC" fillcolor="#29b96f"]
  slack -> bitlbee;
  skype -> bitlbee;
  anymessenger -> bitlbee;
  { rank=same; slack skype anymessenger }
}
EmacsConf is the conference about the joy of Emacs, Emacs Lisp, and memorizing key sequences.
The Free Software Foundation has kindly provided us with a set of great goodies that we're giving away at the Zurich satellite. Here's a little preview:
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
]]>The Hackathon was fully booked with about a hundred participants; the whole symposium has up to 700 participants.
It's been a lot of fun - the event was professionally organized, and the teams were well prepared and had an edge, because all infrastructure had been provided by VIS up front. VIScon also is a real hackathon in the sense that it's all about writing code and having fun - not about winning corporate prizes.
Working with the teams has been great fun - and by sheer coincidence two of our teams were in the top 3 spots 😉🤣. Having said that, actually all teams did a really great job this year and we're happy to have been a part of it!
Here are some impressions from the weekend:
Alain with Max (organizer) and Fabian (team mentor)
Both Max and Fabian are former students of Alain. Big thanks for having this old geezer around! \(^_^)/
View over Zürich from ETH Semperaula
Initial kick-off and introduction on Friday
Hacking room on Saturday afternoon
Souvenirs
Aftermovie from 2018
]]>EmacsConf is the conference about the joy of Emacs, Emacs Lisp, and memorizing key sequences.
We are happy to announce the official EmacsConf 2019 Zurich (CH) satellite! For the satellite, there will be a physical venue where we will gather, watch remote talks and hold live ones whilst enjoying good discussions as well as food.
Following discussions with the organizers of the original virtual conference (Amin Bandali and Sacha Chua), the satellite will be an official EmacsConf venue.
The Zurich satellite will be free (as in beer) for guests. Venue and food will be provided by 200ok llc (200ok.ch) whose founders are very happy Emacs users.
The planning for the main conference is not finished yet, but we do have a tentative schedule for the Zurich venue:
The schedule is subject to change and might shift slightly alongside the lineup of the main conference.
The Zurich satellite venue has a maximum capacity of 30 people. We will open the registration for guests right away, but will limit the registrations to 20 people. 10 spots will be reserved for potential speakers who want to come to Zurich and hold their talk in front of a live audience. We kindly ask you, the speakers, to RSVP by October 10th for a spot at the Zurich satellite. After that, we'll open the remaining seats for guests and speakers alike.
We are very much looking forward to EmacsConf 2019 - to meeting great people, having interesting discussions and sharing all things Emacs and Lisp \(^_^)/
Please RSVP on meetup.com or by emailing emacsconf-register@gnu.org.
Some early pictures of the venue:
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
]]>We'll meet on Monday, October 14, 2019 - starting at 6pm with an open end at Grand Café Lochergut. If you want to join, please create an RSVP at meetup: https://www.meetup.com/zh-clj-Zurich-Clojure-User-Group/events/264886560/
]]>Let's dig into one use case: Meetup is great for organizing your local meetup. Meetup also has great sharing and API options, and it does support embedding the meetup group together with the next meetup. However, there is no out-of-the-box support for embedding your schedule into another website.
At 200ok, we support some websites that do want this kind of integration. Therefore we have written a simple microservice in Ruby with the Sinatra framework which solves this issue by accessing the Meetup calendar feeds API. This iCalendar2web microservice is for the times when you have an iCalendar feed (for example a meetup group or a Google calendar) and want to show it quickly on a website. The microservice either yields a ready-to-use HTML table that you can inject, or it returns the events in a simplified JSON structure.
You can host this project yourself or you can use a hosted SaaS version (see below).
Please find the repository here: https://github.com/twohundredok/iCalendar2web
For example Lambda Zen Temple posts the daily meditation schedule and several other events to Meetup and integrates the same events into their website. You can see this as a demo here.
Request:
GET https://icalendar-to-web.herokuapp.com/calendar/Zen-Meditation-Schweiz?filter=retreat&format=json
Results:
[
  {
    "name": "One-Day-Retreat",
    "start_time": "2019-08-25T10:00:00.000+02:00",
    "end_time": "2019-08-25T17:00:00.000+02:00"
  },
  ...
]
You can do the same for any calendar application that exports iCalendar via HTTP. For example, if you want to embed a Google calendar filtered down to specific keywords to your website.
iCalendar2web is generic in that it can render any public Meetup schedule. Instead of installing iCalendar2web yourself, you can use this hosted version:
https://icalendar-to-web.herokuapp.com/calendar/YourQualifier
With YourQualifier being the Meetup group URL name, e.g. "Zen-Meditation-Schweiz".
If you want to embed it to your own website, use an iframe - similar to Youtube:
<iframe width="800px" height="800px"
        src="https://icalendar-to-web.herokuapp.com/calendar/Zen-Meditation-Schweiz"
        frameborder="0">
</iframe>
You can also load the HTML via Ajax into your page, for example if you want to show a loading spinner upfront. The required access-control headers are set on this service, so no worries about CORS.
$.get("https://icalendar-to-web.herokuapp.com/calendar/MyMeetupGroup", function(data) {
  $("#my-schedule").html(data);
});
For iCalendar2web to run, you need to set a base URL. Most generic calendar applications will export their various calendars using a URL scheme.
For example for meetup.com:
export ICALENDAR_URL="https://www.meetup.com/PLACEHOLDER/events/ical/"
For example for a Google calendar:
export ICALENDAR_URL="https://calendar.google.com/calendar/ical/PLACEHOLDER.calendar.google.com/public/basic.ics"
The PLACEHOLDER is the parameter you enter via YourQualifier.
iCalendar2web takes the following parameters:
- filter: takes a RegExp to filter the name of the meetup
- show_from_to: when set, this changes the default table columns from 'date | time |' to 'date from | date to |'
- limit: sets a limit on how often a specific meetup is repeated in the table; the default is 25. This flag is not supported for the json response.
- format: when set to json, it returns the results as JSON and not as an HTML table/calendar

Some example requests:
- /calendar/YourQualifier will show all events with the table columns 'date | time |'
- /calendar/YourQualifier?format=json will return the same events as JSON
- /calendar/YourQualifier?limit=5 will show up to 5 events per category with the table columns 'date | time |'
- /calendar/YourQualifier?limit=5&filter=retreat will show up to 5 events that include 'retreat' in the title with the table columns 'date | time |'
- /calendar/YourQualifier?limit=5&filter=retreat&show_from_to=1 will show up to 5 events that include 'retreat' in the title with the table columns 'date from | date to |'
To deploy on Heroku, please refer to the documentation of Heroku. To install locally:
Install Ruby (using rbenv in this example):
rbenv install
Install dependencies:
gem install bundler
bundle
Configure your iCalendar target URL by setting (for more information, see Configuration):
export ICALENDAR_URL="https://www.meetup.com/PLACEHOLDER/events/ical/"
Run the application:
rackup config.ru
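Putting the steps together, a hedged local smoke test could look like this (9292 is Rack's default port; the qualifier is just an example):

export ICALENDAR_URL="https://www.meetup.com/PLACEHOLDER/events/ical/"
rackup config.ru &
curl "http://localhost:9292/calendar/Zen-Meditation-Schweiz?limit=5&format=json"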
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
For about 5 years, I used the proprietary tools Things for GTD and Evernote for reference material. Life was good. Until it wasn't. With different upgrades of macOS or the tools that I'd bought, the integration points faltered. Links between the apps stopped working, creating new links took half a minute, data got lost or had to be re-entered. I also had to re-buy the software multiple times, because the new version wasn't just an upgrade but a completely new app, whilst the old one stopped working. Something like this happens quite frequently. For example, last month Microsoft shut down the popular application Wunderlist, which they had bought only 4 years ago. 4 years is not a long half-life for building up a trusted system.
I'm not ranting about having to pay for software. In fact, I'm very happy to pay for it. Writing software is also what I do for a living. However, software should empower certain freedoms. For example, I didn't have the freedom to study how these apps worked and to fix what was broken. I didn't have the freedom to keep running an old version when I liked that one better than the new one. These issues are intrinsic to how macOS and iOS native apps function. There's little one can do about it.
For these and other reasons, I've been using Debian full time again for about 5 years. That's what I was happily using before my macOS stint, and it still makes me happy.
Emacs and Org mode have superseded Things and Evernote for me. Not only are they Free Software (see the GNU definition for what that entails), they let my most critical information be managed more easily in text files using version control. That means I'll always have an easy upgrade path - or I can just keep using the software I'm using now forever.
There's loads of good and great documentation and success stories out there concerning Emacs and Org mode. For example, at 200ok, we've written about both (Emacs, Org mode) before. We're so happy with this stack that we even co-organized EmacsConf 2019.
On to the topic at hand. Emacs is great, but it's desktop software. What are the options for editing Org mode files whilst on the go on an iPhone? For the past year, I've been a partially happy user of Beorg, a native iOS app. When it was new, I couldn't use it, because after parsing and writing my 10k lines of Org mode files, they were completely messed up. I did donate a couple of times to the project, though, because I liked the idea of having such an app on my phone. And happily enough, the author kept working on it (which is never certain for closed source apps) and it got to the point where I could use it. Using it still entails certain problems. For example, Beorg indents certain Org features differently within the Org file, which constantly results in a bigger-than-needed diff. There are other things that I could mention, but I do not want to rant about the work of another developer to whom I'm very grateful.
However, Beorg is closed source. Even though it uses Org markup, it still shares some of the traits of Things. I cannot fix what is broken for me, I have to pay for upgrades, I cannot read the code and understand what the application is doing with my data, I cannot improve the software for my own workflows, and I always have the uncertainty that it will stop working when the author stops compiling it for upcoming versions of iOS. The latter is a real issue, too. For example, Apple pulled a lecture capture application we've built last month. It was feature complete, so we hadn't changed it in three years, but there was a steady user base. Well, that is, until Apple pulled the app from their store. And getting it updated to their new requirements is non-trivial and will likely not happen any time soon, because there are other things to do.
For this reason, we have started https://github.com/200ok-ch/organice. organice is a Free and Open Source implementation of Org mode without the dependency of Emacs. It is built for mobile and desktop browsers and syncs with Dropbox, GitLab, WebDAV and Google Drive.
At 200ok, we run an instance of organice at https://organice.200ok.ch, which is open for anyone to use! organice does not have a back-end (it's just a front-end application, which uses different back-end storage providers). We don't store any kind of data on our servers - we also don't use analytics on organice.200ok.ch.
Documentation: https://organice.200ok.ch/documentation.html
Community chat: #organice on IRC Libera.Chat, or #organice:matrix.org on Matrix
If you liked this post, please consider supporting our Free and Open Source software work – you can sponsor us on Github and Patreon or star our FLOSS repositories.
Lastly, some impressions of organice in action:
My main GTD file, completely folded
Drilling down into one sub header
Example of a daily agenda
]]>I decided that I need to change something about my GPG setup. I was still using a 1024bit DSA key from 2010 which means: Even if I create new and stronger subkeys, my signatures would forever be weak.
Since upgrading my old primary key was a non-trivial task, I'm writing this blog post for future reference by me or you.
First off: You cannot really upgrade a GPG primary key. You can create new subkeys which have stronger encryption, but those will be signed by the primary key. So holistically speaking, this is a bad situation.
If you want to get out of that hole, you'll have to: create a new primary key, sign the new key with the old one to prove the identity behind it, and inform the people who signed your old key, kindly asking them to sign the new one, too.
These are very good tutorials:
One suggestion that is not mentioned in these two links: use an expiration date less than 2 years in the future. This acts as a 'dead man's switch'. As long as you have access to your private key, you will always be able to extend the expiration date - even after it has passed. If you do, it's prudent to also set a reminder to extend the expiration date in the future.
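Extending the expiration date later is an interactive affair; a minimal sketch of how such a session looks (assuming GnuPG 2.x):

gpg --edit-key [your_fingerprint]
# at the gpg> prompt:
#   expire   (pick a duration, e.g. 2y)
#   save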
I had to change configuration in:
- ~/.gitconfig: the signingkey under [user]
- ~/.gnupg/gpg.conf: the encrypt-to and default-key settings

Don't forget to change your MUA configuration, too. I'm using Mu4e (Mu 4 Emacs); you can find my configuration here. This is a good time to send some test mails to yourself and see if signing/encryption works as expected.
I also had to make changes to other files you might not have. For example, I'm using ~/.authinfo.gpg, which holds server credentials (such as SMTP or IRC). I also have some other encrypted files which hold personal data. I decrypted all of those and re-encrypted them with the new secret key.
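Such a re-encryption can be a simple pipe. A hedged sketch with a hypothetical file name:

# decrypt with the old key, re-encrypt to the new one
gpg --decrypt notes.gpg | gpg --encrypt -r [your_new_fingerprint] -o notes-new.gpg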
Think of all applications you're using that might require a current PGP public key. Some could be:
Export your public key and add it to those applications:
gpg --export --armor [your_fingerprint]
Upload your key to some keyservers - especially the ones that you've been using before. For example:
gpg --keyserver pgp.mit.edu --send-keys [your_fingerprint]
gpg --keyserver hkp://pool.sks-keyservers.net --send-keys [your_fingerprint]
If you ever lose your secret key or it gets compromised in any way, it's good to have a revocation certificate handy. If you're using a 'newer' version of GnuPG (> 2.1), this happened automatically when you created a new key (step 1) - you will find it in ~/.gnupg/openpgp-revocs.d/. If it's not there, create one using:
gpg --output revoke.asc --gen-revoke [your_fingerprint]
It's important to back up your PGP keys - for example by printing a hardcopy.
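A minimal sketch of such an export (handle the resulting file with care; leaving it unencrypted on disk defeats the purpose):

# export the secret key in ASCII armor for printing / offline storage
gpg --export-secret-keys --armor [your_fingerprint] > secret-key.asc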
I looked up who signed my old key (gpg --list-signatures [your_old_fingerprint]) and sent those people an email. You can use mine as a blueprint:
From: Alain M. Lafon <alain@200ok.ch>
To: Alain M. Lafon <alain@200ok.ch>
Bcc: [everyone who signed my old key]
Subject: New PGP key

Dear fellow PGP user

After talking to some people more knowledgeable than me on GPG, I decided that I need to change something about my GPG setup. I was still using a 1024bit DSA key from 2010 which means: Even if I create new and stronger subkeys, my signatures would forever be weak. Therefore I decided:

- To create a new primary key
- Sign my new keys with my old keys to prove the identity behind the new key
- Inform the people who signed my old keys that I've got a new one and kindly ask that they sign the new one, too

Since you signed my old key, you're receiving this email. If you are not interested in getting/signing my new GPG key, I deeply apologize for the spam message - you can safely ignore this email and stop reading here.

The old key will continue to be valid for some time, but I prefer all future correspondence to come to the new one. I would also like this new key to be re-integrated into the web of trust. This message is signed by my new key which itself is signed by my old key to certify the transition.

The old key was:

pub 1024D/C5833B41 2010-12-08
Key fingerprint = 79D6 2944 374F 5C7A A4DF 71CD E87B 13F0 C583 3B41

And the new key is:

pub 4096R/8E1FC0E9 2019-07-17
Key fingerprint = D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9

To fetch the full key from a public key server, you can simply do:

gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-key 'D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9'

Alternatively, my old and new keys are on the following keyservers:

- http://pgp.mit.edu/pks/lookup?search=alain+m.+lafon&op=index
- http://hkps.pool.sks-keyservers.net/pks/lookup?search=alain+m.+lafon&fingerprint=on&op=index

I also uploaded the public key to my company's (200ok llc) website with information on my fingerprint for additional insurance for you: https://200ok.ch/team.html

If you already know my old key, you can now verify that the new key is signed by the old one:

gpg --check-sigs 'D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9'

If you are satisfied that you've got the right key, and the UIDs match what you expect, I'd appreciate it if you would sign my key. You can do that by issuing the following command:

gpg --sign-key 'D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9'

I'd like to receive your signatures on my key. You can send me an e-mail with the new signatures:

gpg --armor --export 'D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9' | gpg --encrypt -r 'D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9' --armor

It's helpful to disable old keys to make sure that future communication gets encrypted for the right key:

$ man gpg2
[...]
A disabled key can not normally be used for encryption.
[...]

$ gpg --edit-key 79D62944374F5C7AA4DF71CDE87B13F0C5833B41
[...]
pub dsa1024/E87B13F0C5833B41
[...]
gpg> disable
gpg> save

Thank you very much for your time and consideration!

Best regards
Alain
If you want to get in touch, you can securely contact me using PGP(;
Fingerprint = D465 337B 218A 0216 ECDC 368E 1370 99B3 8E1F C0E9
Key ID = 0x8E1FC0E9
https://200ok.ch/pgp_keys/pubkey_alain.asc
Thank you Pascal Huber and Max Schrimpf for reading drafts of this!
Thank you Tomáš Pospíšek for recommending changes to the article!
There's one little UX pitfall into which I see people stepping over and over (I'm no exception here). If you've seen issue updates like this, you know what I mean:
The reason is that when working in the "project board" view, the preview of an issue has "Close issue" as the primary call to action. During planning and meetings, people move from issue to issue and want to "close" the current one so that they can go to the next. However, that button doesn't close the preview (the little 'x' on the top right does that) - it closes the issue. Here's such a screen:
The remedy is easy. Modern browsers have been shipping with it since the early days: User Style Sheets! If you don't know where they reside on your system, this article explains how to add them to different browsers.
Just add this snippet of CSS to your user style sheet, restart the browser, and the "Close issue" button will be gone.
/* Do not display Github Issue "Close" button in projects view */
#issue-state-button-wrapper button.js-comment-and-button {
  display: none !important;
}

Happy project planning!
]]>When running offlineimap, I got the following error for one of my accounts:
Establishing connection to imap.redacted.ch:993 (redacted-Remote)
ERROR: Unknown SSL protocol connecting to host 'imap.redacted.ch' for repository 'redacted-Remote'. OpenSSL responded: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:727)
ERROR: Exceptions occurred during the run!
If you see this error, let me save you some time with your favorite search engine: the reason is that "newer" versions of OpenSSL fend off a TLS attack called FREAK (Factoring RSA Export Keys). When you get this offlineimap error, it means that you're encrypting the connection to your mail server with TLS whilst using a key smaller than 768 bits. This connection can be attacked and is therefore considered unsafe. That's why OpenSSL will terminate this connection by default instead of trusting it. Read more about this attack in a blog post on openssl.org from 2015.
If you own the mail server yourself or have some kind of authority over it, please don't use the workaround I'm proposing here, but upgrade your mail server's security. As mentioned above, OpenSSL wrote about and fixed this issue in 2015 - so it's about time for sysadmins to follow up on it.
In my case, I don't have authority over the mail server in question (it is an Outlook server of a big corporation). If you're in the same boat, the 'fix' is simple: ignore the error by falling back to the older protocol version TLS 1.2. For that, open your .offlineimaprc configuration file, go to the section [Repository yourServer-Remote] and add the line ssl_version = tls1_2. The full entry will look like this:
[Repository redacted-Remote]
type = IMAP
remotehost = imap.redacted.ch
remoteuser = me@redacted.ch
remotepass = ...
ssl_version = tls1_2
[more customizations]
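If you're curious what the server actually offers, one hedged way to peek at the handshake is OpenSSL's s_client (the host name is the redacted example from above):

openssl s_client -connect imap.redacted.ch:993 -tls1_2 < /dev/null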
Good luck and enjoy reading mails from insecure servers^^
If you're curious about my mail setup, let me elaborate a little on that: After downloading emails with Offlineimap, I read and answer them using Mu4e and Emacs. This is by far the best email setup that I have worked with in 20 years of using email on a daily basis. You can find my Mu4e Emacs configuration here: https://github.com/munen/emacs.d/#mail
]]>Surely this has happened to you as well: an employee who always works the same shift on a given weekday enters their vacation. You generate the duty roster and only notice afterwards that this employee has been assigned a shift during their vacation.
To avoid this, there is now a company-level option: rank employees' vacations as more important than shifts. When it is enabled, employees are automatically removed from shifts that fall within their vacation before the roster is generated. We have enabled this option for all existing companies!
We have written a new rule which tries to give each employee at least one whole weekend (Saturday and Sunday) off per duty roster. This rule promotes a fair distribution of weekends. It is disabled for existing companies; it can be enabled among the optimization rules.
For some time now, the "prepare duty roster" view has offered the option to create a holiday. On a holiday, all shifts are hidden. You can use this option when your company is closed on a specific day.
Would you also like to create your duty rosters fully automatically with QuickShift? Then write us an email, or create your company yourself and we will get in touch with you as soon as possible.
Clojure and ClojureScript provide fine-grained control over the state of a running application. Especially when combined with a reactive front-end framework like React, features like hot-swapping code reach far beyond simple live reloading.
Applying a reactive paradigm means that changing the data updates the user interface. Clojure's functional nature, with its strict separation of data and code, lets us use the reactive paradigm for development as well: changing the code updates the user interface.
This works not only in the front end of a development environment. Using this mechanism over a networked REPL provides the same capabilities in the back end of a production environment.
In an example-driven approach we will explore several typical situations in software development in which Clojure helps us to speed up not only our development cycle but also DevOps.
]]>This is a copy of the article:
With Progressive Web Apps (PWAs), the lines between responsive websites and native apps are blurring. But a PWA is not the better application for every purpose. Phil Hofmann, lecturer at ZHAW and founder of 200ok, explains where the strengths and weaknesses of PWAs lie and why open standards matter.
Phil Hofmann: PWAs promise the best of both worlds: more and more APIs are becoming available through which web applications can use the native features of end devices. During development, you can take full advantage of web technologies and skip the sometimes tedious mechanisms for publishing native apps. PWAs also have a lower barrier to entry, since the app doesn't first have to be found and installed through a store. Established SEO strategies can therefore be used to their full extent to promote PWAs.
As always, that depends on the use case. Some features can already be used quite well; with others it's still bumpy. But the possibilities of PWAs are becoming ever more versatile. Just a few weeks ago, the experimental "Web Share Target API" was released. With it, it is now possible not only to open the native sharing menu from a PWA, but also to register your own PWA for that menu - a feature I had long been missing in PWAs.
In this context, the term "progressive" is often used because of the concept of "progressive enhancement". The base functionality is built with standard web technologies and is continuously improved as access to native functionality increases. In my view, this is the positive formulation of the much older concept of "graceful degradation" from the world of web technologies. That term was coined at a time when the minimum standards of browsers regarding functionality, but also security, were considerably lower, and one worried that users might have disabled JavaScript in their browser configuration - in which case the website was still supposed to work. Nobody talks about that anymore; everybody uses JavaScript. The question now is which APIs the browser the PWA is currently running on supports.
That depends entirely on the use case. There are cases that, for technical reasons, can only be implemented with a mobile app, but such cases exist for web applications just as well. Sometimes you need both. Essentially, they are technologies that complement each other well.
There is a statistic you hear again and again in this context: "The average user installs 0 apps per day." Because PWAs are ultimately just websites, one hopes for a bigger reach, since the installation process has a much lower barrier. PWAs are also significantly smaller than compiled native apps, which further improves the installation experience and saves bandwidth and data volume. Updates of PWAs are accordingly less cumbersome than updates of native apps through a store. In general, I welcome the opportunity to use the open standards the web itself is built on, and I see more potential in integrating PWAs with web applications and with one another. The lack of shared standards hampers this development in the context of native apps.
The big unknown is the availability of APIs. On the proprietary platforms, the big players apparently find it easier to commit to a stable API. With the APIs you program against in a PWA, you often can only hope that they will continue to be supported. Another frequently discussed drawback is that, depending on how you reach your target audience, a presence in a store may well be desirable. Some audiences expect a native app and search directly in the store. Google recently responded to this objection and now also offers the possibility to publish PWAs in the Google Play Store - a somewhat absurd, but understandable development.
Given the availability of the necessary APIs, applications with a limited and clearly delimited feature set can - apart from a few special cases - be implemented well as PWAs.
Special cases could be applications in areas where performance or security matter, for example. In such cases, the open standards of the web can sometimes lead to suboptimal solutions. For projects whose feature set cannot be clearly delimited up front, developing a PWA can turn out to be a dead end - namely when, in the course of development, dependencies arise on interfaces that are only available to native apps.
One of the problems that probably cannot be solved easily is that of webviews. Webviews are browser windows embedded in native apps that are used to display web pages. They are employed on the one hand to improve usability, and on the other to prevent the user from leaving the application when following a link, for example. The limited feature set of these webviews undermines the PWA concept, because the necessary APIs are then not available.
The big players have always raced one another with their stores for native apps. Google presumably expects an advantage from taking the pioneering role for an app format that could permanently lower the barrier to publishing.
Presumably for the same reason. Google has the pioneering role, so Apple has to take the "not invented here" position.
]]>I don't think this is really about one technology winning out, as I see it more as technologies complementing one another. But of course I would like the big players to support the use of open standards as well.
]]>This feature is called "flat volume" and seems to be targeted at users who do not understand the concept of a mixer. For example, someone might turn the volume of an application (e.g. Spotify or YouTube) all the way up while the system volume is very low or even at zero, and wonder why "it" isn't working.
At first glance, this might seem reasonable. However, speaking from personal experience, it's actually physically dangerous. Some applications take pro-active control over the volume (for example Zoom). If you're wearing headphones and some application forces the volume to 100%, you might be in for ringing ears or even permanently damaged hearing. Another reason is that you might actually want to run multiple applications at different volume settings, because not all sound is mixed at the same levels - so "100%" for one source might be a lot fewer decibels than "50%" for a different source.
For me, on Debian, the "flat volume" feature was enabled by default and has hurt me a couple of times. Fortunately, it's very easy to turn off. You can set all kinds of flags in ~/.pulse/daemon.conf for PulseAudio to change its behavior:
flat-volumes=no
Then restart PulseAudio with pulseaudio -k.
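To double-check that the daemon picked up the change, something along these lines should work (pactl ships with PulseAudio; autospawn may make the explicit start unnecessary):

pulseaudio -k        # kill the running daemon
pulseaudio --start   # start it again if it doesn't respawn on its own
pactl info           # the daemon should be up and responding again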
NB: This is not a new topic - in fact, there are discussions [e.g. 1, 2, 3] and bug reports for different Linux distributions. The issue seems to be that the upstream maintainer set this default, and some distros are very reluctant to change a default away from upstream.
You're good to continue hacking - with earphones on!^^
Update 2019-09-27: Thank you Mr. Alberto González Palomo (https://matracas.org/) for pointing out this issue isn't new, at all.
]]>The project was originally created in a funded company in which Phil and Alain shared the role of CTO. Many big publishers (e.g. Frankfurter Buchmesse, Fraunhofer Institute, Deutscher Ethikrat) have used the platform in the past to create and distribute their content in an open manner.
So here it is: https://github.com/voicerepublic/voicerepublic_dev
The ongoing plan is for OpnTec and FossAsia to build a team that runs voicerepublic.com as a service and further develops the platform with the community. 200ok will continue to be a contributor in building this community and bringing VoiceRepublic into a new free and open future!
You can join the VoiceRepublic channel here: https://gitter.im/VoiceRepublic/VoiceRepublic
If you're curious and want to hack away on it, a good starting point is the vr-devbox repository, which sets up a dev VM using Vagrant.
]]>Generally speaking, I liked the idea of having my code automatically formatted in Emacs whilst adhering to configured linters such as eslint. However, from the docs it was much too complicated to use. More importantly, though, the documentation asks you to configure it through files which are usually checked in (like package.json or .eslintrc.json). For new or our own projects, this is fine. However, for customer projects, I don't want to impose new tools. Additionally, it is a whole lot of repetitive work to set up such tooling for every single project, which kind of defeats its ulterior motive. These days, who doesn't work on a couple dozen code bases in parallel?(;
Therefore, I did what every self-respecting engineer would do in this scenario: I wrote a little Elisp wrapper. This wrapper implements an interactive function autoformat, which is a thin wrapper around command-line based code autoformatters, utilized through a strategy pattern. At the moment, the tools prettier and prettier-eslint-cli are implemented. With those, autoformatting a wide variety of languages/formats like JS, CSS, Sass, HTML, JSON and many more is possible. To add a new language/framework, just add a new strategy function which yields a command-line tool that adheres to this workflow: read source code from stdin, format it, and pass it to stdout.
You can find the documented code in my Emacs repository: https://github.com/munen/emacs.d/#auto-formatting
If you like it, feel free to fork the repository, make pull requests or just give it a star^^
Happy hacking!
]]>Livingdocs is a growing Startup based in Zurich. Their product is a modern Web Content Creation and Publishing System, in use at large corporations. However, if you have any use cases for CMSs (and who doesn't?^^): Their product is also great for smaller and bigger publishers alike - you can sign up to their SaaS product for free and try it out yourself!
]]>There was one thing that I never got properly to work: Bluetooth. The new 6th gen uses a different chipset than the older models, which I saw working just fine on my colleagues' machines. It turns out that the new chipset also worked fine under old kernels (I tested 4.9), while on newer ones (I tested 4.16 and 4.18) it just behaves weirdly. Sometimes it might find devices, sometimes it might pair, sometimes it might work for a couple of seconds, but mostly it will not work.
After some time researching the issue, because I didn't want to run a 2 year old kernel if I didn't have to, I found this discussion on Launchpad: https://bugs.launchpad.net/ubuntu/+source/linux-firmware/+bug/1729389
Turns out, the Intel developers know about the problem and it's already fixed in some downstreams of the kernel. If it isn't working for your distro of choice, you can just change one setting in the kernel module. Add this one line to your /etc/modprobe.d/iwlwifi.conf:
options iwlwifi bt_coex_active=0
Create the file if it isn't already there (it wasn't for me on Debian Testing). Reboot and check that the bt_coex option is disabled:
cat /sys/module/iwlwifi/parameters/bt_coex_active
N
If you're curious what the flag is about, you can read a good explanation on Superuser.com.
You're good to go now. Enjoy all the BT goodness of Audio, input devices and so forth.
]]>We from 200ok will be there as well - as are our friends from the Insopor Zen Academy. Join us for three great days of hacking, fun and learning!
Last time we built and released our open source crowdfunding and equity funding platform Swiss Crowdfunder, which was immediately used successfully to raise over a quarter million Swiss francs for the Data Center Light. The event itself was really successful and we got a great article about 200ok and ungleich in the newspaper Südostschweiz.
Join the event now on the dedicated page or on Meetup.
Where does Hack4Glarus take place?
It will happen at Spinnerei Linthal, a very cool old factory hall at Linthal.
What will be provided at Hack4Glarus?
What do I need to bring to Hack4Glarus?
And there’s an extra: Fridolinpass.
If you have an extraordinarily good idea, apply for the Fridolinpass! Write us why you should participate in Hack4Glarus and how the community will benefit from your hacking. For those who win the Fridolinpass, we will cover their travel costs within Switzerland to Hack4Glarus.
How can I apply?
Apply here and submit your ideas!
Attention: only a limited number of seats are available. Apply now!
]]>Today, we open sourced our tool to synchronize a time CSV sheet to letsfreckle.com: https://gitlab.com/200ok/csv2letsfreckle
We hope it can make your life a little less repetitive and give you more time to work on your own projects!
]]>You can tell MU4E to prefer HTML over plain text by setting mu4e-view-prefer-html, but there's probably few of us who would do that.
However, you might still see a whole lot of HTML emails. And when you check if they have a plain text version, they might have one! There's a reason for that. MU4E has a 'HTML over plain text' heuristic with this official rationale:
Ratio between the length of the html and the plain text part below which mu4e will consider the plain text part to be 'This messages requires html' text bodies. You can neutralize it (always show the text version) by using `most-positive-fixnum'.
This heuristic overrides the default setting (and your configuration) that plain text should be preferred over HTML!
In my experience, HTML Emails are WAY longer than only 5x the Plain text (Doodle, Airbnb, Meetup, etc), so this will yield me a lot of false positives whereas I have never seen a "This message requires HTML" body. Since I realized that MU4E has this heuristic, I overrode it just like the doc string told me to and am an even happier MU4E user.
(setq mu4e-view-html-plaintext-ratio-heuristic most-positive-fixnum)
NB, if you want to be able to read HTML emails, that's totally and 100% supported within MU4E! You can render them as:
For these and other goodies in MU4E, please have a look at my configuration: https://github.com/munen/emacs.d/#mu4e
]]>Here's the gist of what it gives you:
The project is hosted here: https://github.com/200ok-ungleich/swiss-crowdfunder/.
Here are the full release notes:
Added
Changed
Enjoy^^
]]>The Hackathon is fully booked with about a hundred participants. So far, it has been an awesome event! From the obvious points like the location (ETH) to the catering (fresh food and drinks around the clock) and the frictionless, professional organization, it was just a great way to spend the weekend.
Here are some impressions:
Flyer
Thank you, Max Schrimpf (vice president of VIS), for inviting me as a mentor!
Based on the Intel datasheet for my CPU (i7-8550U), it seems that with the 'fix' enabled, the CPU potentially draws a lot more power for longer periods than it is designed to (see the TDP numbers in the sheet). This whole clocking business is a matter for people who really know what they're doing - and I feel I don't have all the information to make a reasonable decision to overrule the defaults. I have the feeling that the CPU can do 4GHz, but isn't supposed to do it for longer than a couple of seconds, because then it would get too hot and draw too much power.
Since I was content with the speed of the machine before the fix and I don't want to run the risk of melting my machine, I'm personally going to disable the fix. I know that some people have been running with the 'fix' on their machines for months, so I'm not saying that it's the wrong thing to do. I'm just personally going to stay on the safe side.
Since the 'fix' was installed through an install.sh script and isn't available as a package, uninstalling has to be done by hand. However, reading the install.sh script reveals all the steps that need to be taken:
systemctl stop lenovo_fix.service
# also remove the enablement symlinks, not just the unit file
systemctl disable lenovo_fix.service
rm /etc/systemd/system/lenovo_fix.service
rm /etc/lenovo_fix.conf
systemctl daemon-reload
reboot
The above steps still leave the three packages installed through pip - and whatever OS packages you installed - on your system. I don't want to issue a removal script for those, because you might be using them for something else and I don't want to break your system.
]]>If you're not running the latest BIOS version (1.30), we strongly recommend upgrading. The most pressing issues under Linux that have been solved since 1.23 are:
You can upgrade your BIOS (without Windows) by following these instructions:
There's one big caveat left even in the latest BIOS version, though. When you're running a CPU intensive task, your machine will be only about 50% as fast as it could be!
Let that sink in for a moment.
The reason is that under Linux the CPU frequency will be throttled very early - when reaching a temperature of 75 or 80C. The standard setting in Windows is 97C. Therefore, for many workloads, you might be running at only half the CPU frequency that you could.
Let me show you a couple screenshots:
Running sysbench on 8 threads without the fix
As you can see, it takes my machine 43s to finish the benchmark.
Running the same sysbench benchmark with the fix
Now, the same calculation takes only 18s - less than half the time from before! These are reproducible numbers. We're aware that these benchmark runs are rather short; it's the same result when the benchmarks run substantially longer, too.
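If you want to reproduce the comparison on your own machine, something like the following should do (a sketch assuming sysbench >= 1.0; older versions use --test=cpu and --num-threads instead):

# CPU benchmark on 8 threads - run once without and once with the fix
sysbench cpu --threads=8 run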
If you want to install the workaround yourself, here's the fix which luckily was already provided by Francesco Palmarini: https://github.com/erpalma/lenovo-throttling-fix
There's one slight adjustment we made in order to get a longer battery life: in /etc/lenovo_fix.conf, I set the CPU temperature threshold on battery to 80C instead of 85C.
Let me give you an arbitrary example to convey a sense of the kinds of situations I'm talking about: you often need to run some sort of Docker containers (for example for the databases) and then some services which you actually develop - let's say a server and a front-end. In such a scenario, there's lots of redundant typing to be done every time you switch to this project:
Like I said, this is completely arbitrary - there are gazillions of other complicated setups out there.
This manual mess can be solved by adding a custom function around tmux to your .sourceme file:
startup () {
tmux new-session -d 'sudo service postgresql stop; docker-compose up'
tmux split-window -v 'npm run watch'
tmux split-window -h 'cd ~/src/200ok/some-service/front-end; npm run start'
tmux attach-session -d
}
echo "Supported commands: \n"
echo " - startup: Starts tmux, docker-compose, back-end and front-end"
So now I only have to run startup and I get three terminals in one, all already running the right services.
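For completeness, a typical session with this setup might look like the following (the project path is just an example):

cd ~/src/200ok/some-service
source .sourceme   # prints the supported commands
startup            # opens tmux with all three panes running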
They canonically look like this:
#+NAME: <name>
#+BEGIN_SRC <language> <switches> <header arguments>
<body>
#+END_SRC
This works great when you want to add either a bigger snippet for an exported document or you want to do actual calculations with data from the document.
For the latter case, if you don't need a whole lot of room for your code - for example if you're doing an easy calculation like the VAT on an invoice, it would be great if you had the ability to add inline code blocks. And of course, Org mode provides!
Anywhere in an Org mode document, inline code snippets can be embedded like this: src_elisp{(+ 1 2)}
When you hit C-c C-c while point is on this code, it will execute and append the result. When exporting, the code block is not visible while the result is.
Carl has studied computer science with Alain in Stuttgart. Apart from that, Carl and Alain already started working together professionally on their first serious startup MVP close to 10 years ago. It's a great pleasure that he's adding his expertise to the 200ok team, now! In fact, we're a little late in writing this post as he already started in July^^
Carl is an independent IT consultant from Northern Germany who holds a BA in Politics and Public Administration as well as in Applied Computer Science. He worked for nearly a decade in the software industry, starting out his career slinging J2EE code on application servers and, just a few years ago, falling in love with programming again through the power of Lisp - or, more specifically, Clojure. He also enjoys operations work and has spent considerable time working in a high-traffic, high-availability and very cloudy environment.
]]>Generally speaking, functional programming can be boiled down to the idiom that you write your programs through intense usage of a few versatile data structures and many composable functions. Clojure is such a language. JavaScript was initially created with Scheme and Self in mind, so it supports both the functional and the object-oriented paradigm. Modern JavaScript has lots of syntax on top of a very lean and arguably beautiful core. However, the additional syntax absolutely isn't required to write well-structured, idiomatic JavaScript code. In fact, there is an ongoing debate on whether it is generally a good idea to blow the language spec up as much as it has in recent years. The additional syntax can help you where it makes sense, but there's no need to start by learning all of it - a functional core will take you very far! Therefore, if you know a Lisp, you're already halfway to becoming a good JavaScript developer! If you also know a language with a "standard" class-based hierarchy approach (Ruby, Python, Java, C#, you name it), you're even closer to writing great JavaScript code!
Clojure and JavaScript share many attributes which you don't have to re-learn:
To quickly transpose your development knowledge, you have to ask the following questions:
I'll make a stretch and say that maps and vectors are the most important data structures in Clojure (at least to get started). JavaScript has two data structures which are similar to those. Of course they are not persistent, so they are not exactly the same; however, the other attributes are very similar.
Maps are Objects: https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects
Vectors are Arrays: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array
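To illustrate how directly that vocabulary carries over, here is a small sketch of a typical Clojure-style transformation in plain JavaScript (the data and names are made up for this example):

const people = [
  { name: 'Ada', active: true },
  { name: 'Bob', active: false },
];

// Clojure equivalent: (->> people (filter :active) (mapv :name))
const activeNames = people
  .filter((person) => person.active)
  .map((person) => person.name);

console.log(activeNames); // => ['Ada']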
For a larger vocabulary, import the Lodash library.
Read more about modules in JavaScript: https://hacks.mozilla.org/2018/03/es-modules-a-cartoon-deep-dive/
And about classes: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
Many things make JavaScript special, but one of the most interesting properties is that doing I/O is never blocking! JavaScript is heavily event-oriented. Handling I/O is performed via events and callbacks/promises.
Example:
setTimeout(function() {
  console.log("This text appears in 2s as a log message.")
}, 2000)
Read more about the concurrency model and event orientation here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop
There are currently over 700'000 packages on NPM with about 40% year over year growth.
Of course, quantity and quality are not the same. There's lots of churn, and the 'new hotness' is sometimes quickly replaced by 'the leading edge'. However, a benefit of writing idiomatic functional code is that your code will persevere!
Learn how to program in JavaScript with equally good high-level guides and in-depth tutorials: https://developer.mozilla.org/en-US/docs/Web/JavaScript
The complexity of software is growing at an exponential rate. The biggest challenge is the growing complexity of dynamic state, which makes it hard to reason about a system. There are many paradigms aiming to ease the situation. To reduce incidental complexity, "Functional Programming" and "Code Hot-Reloading" have become much talked about topics in the web development community.
In this talk, Alain shows how to supercharge your development setup with true code hot-reloading in a truly functional programming language.
Please find the slides and demo application here: https://gitlab.com/200ok/talks/
Picture and video credit: Aleksej Dix from Web Züri, Original Link
However, for my workflow (which is based on GTD) it is important to know that I have all my pending tasks visible in one place. In fact, that's one of the great features of Org mode - I have my meeting minutes, tasks with links to resources like mails, time tracking, etc., all in one place. Having some tasks scattered across different tools is dangerous, because:
For this reason, I import tasks from proprietary tools into my local Org mode agenda. I don't do anything fancy with it like two-way sync, as this complicates matters a lot. Those external tools are great at what they do (collaboration [potentially in real time], exchanging assets, etc.), and there is little sense in cloning that functionality in Emacs. There are more important things to do, at least^^
My flow is:
For importing tasks from Pivotal Tracker, I just open sourced my small script which you can find here: https://gitlab.com/200ok/tracker2org
It's nothing fancy, at all, but it might save you the time to write it yourself. Or you might just take the thought away that it might make sense to have a local copy of all the (potentially distributed) tasks.
Development works best out in the open!
The finest Italian coffee makes for a good mood.
The retreat house has a wonderful view of Lago Maggiore.
From business decisions to architecture and development - everyone is hard at work!
Early risers are treated to a wonderful spectacle every morning.
QuickShift is the automatic shift planner that takes the load off managers and creates schedules that all employees are happy with.
Businesses with roles and shifts face a hard-to-solve problem anew every month: creating work schedules that cover the needs of the business, take employees' wishes into account and are legally compliant. Creating a work schedule means a very high, manual and repetitive investment of time.
With QuickShift, we fully automate the creation of shift schedules. This relieves management, which can put the time gained to better use elsewhere. Schedules are created at the push of a button, taking into account employees' wishes, legal regulations and the individual needs of the business. The quality of the automatically created schedules is significantly higher than that of schedules created by humans. This increases employee satisfaction.
The QuickShift team is ideally equipped to tackle this challenge - it consists of veterans from the hospitality industry, has deep knowledge of the underlying mathematical problems and has 20 years of experience in developing software solutions.
After two years of market research, testing and prototyping, QuickShift has now been in use at various businesses for a month, with very good feedback. Sign up today and benefit from QuickShift!
Electron apps like Slack accept the flag --force-device-scale-factor. On my MacBook Pro, I'm using a scale factor of 1.5.
Here's a little convenience wrapper around the slack
binary to
always start Slack with this setting. The same can be done for other
Electron apps, as well.
/usr/local/bin/slack
#!/bin/sh
/usr/bin/slack --force-device-scale-factor=1.5
This snippet for your ~/.zshrc will recognize project folders you change to, so that when you create new shells (e.g. by opening a terminal) it changes to the last used project automatically. Please find the complete snippet at the end of this post.
Some projects have a lot of processes. While there are tools for orchestrating the startup of applications that require multiple processes, sometimes it's just more convenient to open a terminal for each of those processes. But having opened multiple terminals, it would be cumbersome to change to the project's directory in each of those shells. And more generally, it would be nice to have a shell setup which is aware of the project I'm working on and thus automatically changes the directory to the current project as I spawn new terminals.
Let's make a list of what we need to make that happen:
A flat file will do for storage. This file storage will be required throughout the upcoming code. Let's put it in a variable WD (for "working directory").
WD=~/.wd
We can easily save the current working directory:
pwd > $WD
And read it back:
CURRENT_PROJECT=`cat $WD`
But since we're planning to be able to revert to the last detected project, we'll actually use it as a stack, and thus instead of overwriting the file we'll append to it.
pwd >> $WD
And when reading from the stack instead of reading the whole file we'll just read the last line.
CURRENT_PROJECT=`tail -1 $WD`
So reading and writing to the storage is set, let's move on.
Distinguishing project from non-project directories is tricky and might depend on the tools you're using. Since I'm using git in almost all of my projects, I settle for the presence of a .git directory as an indicator of a project directory.
if [[ -d .git ]]; then
# ...
fi
If you are using other VCSs, you need to change that, obviously. Good indicators might also be project settings files written by your editor or IDE, or dependency/project automation files (like Gemfile for Ruby, package.json for JavaScript or project.clj for Clojure).
Hooking into cd
Hooking into changing directories is fairly easy with zsh, as it provides chpwd among its so-called "Hook Functions". But it is good practice to use add-zsh-hook, which lets you register multiple functions to a hook.
autoload -U add-zsh-hook
add-zsh-hook chpwd recognize-project
recognize-project is a function that we still need to write.
Shells other than zsh provide similar functionality. In some cases, like bash, you get away with wrapping the builtin cd command in a function that calls the builtin but also runs your own code.
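A minimal sketch of that approach for bash (reusing the recognize-project function defined below) might look like this:

# wrap the builtin so our hook runs after every successful directory change
cd() {
    builtin cd "$@" && recognize-project
}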
Automatically changing to the last location stored is as easy as calling
cd `tail -1 $WD`
Adding this to your ~/.zshrc will run it automatically for each new shell. Just be aware that as long as ~/.wd is empty or doesn't exist, this will throw an error.
Putting it together:
#!/usr/bin/zsh
autoload -U add-zsh-hook
WD=~/.wd
recognize-project() {
if [[ -d .git ]]; then
pwd >> $WD
fi
}
add-zsh-hook chpwd recognize-project
cd `tail -1 $WD`
Used in practice, this quickly reveals some weaknesses.
Sometimes, while working on project A, we just want one shell in project B to look something up - but we quickly realize that the location of project B has been stored when we open the next shell, and we would like to have a means of undoing that. In that case, we just need to remove the last line from the storage (pop the stack), read the location before that and change to it.
previous-project() {
sed -i '$ d' $WD
cd `tail -1 $WD`
}
alias pp=previous-project
I like to give functions expressive names, but I don't want to type these, so I aliased previous-project to pp here.
Another weakness is that our stack will quickly collect multiple consecutive equal lines. This is not of much use and in fact renders the just-added undo feature useless. So, to get rid of consecutive duplicate lines in our stack, we'll use some sed magic:
sed -i '$!N; /^\(.*\)\n\1$/!P; D' $WD
This reads: if it's not the last line, read the next line and check whether the two are equal; if they are not, print the first, and in any case delete it. This effectively removes consecutive duplicate lines and thus keeps our stack usable.
Ok, let's put everything together! This gives us the complete snippet:
#!/usr/bin/zsh
autoload -U add-zsh-hook
WD=~/.wd
recognize-project() {
if [[ -d .git ]]; then
pwd >> $WD
# delete consecutive duplicate lines
sed -i '$!N; /^\(.*\)\n\1$/!P; D' $WD
fi
}
add-zsh-hook chpwd recognize-project
previous-project() {
# delete last line
sed -i '$ d' $WD
cd `tail -1 $WD`
}
alias pp=previous-project
cd `tail -1 $WD`
You neither need to be an Emacs user nor a Clojure programmer, nor do you need to contemplate becoming one, to enjoy this talk. Much like you don't become a professional musician by attending a concert - but it might very well be inspiring.
This talk was recorded at the Clojure Meetup in Zurich, Switzerland.
The slides are available for download:
You can find my (literate) Emacs configuration here: https://github.com/munen/emacs.d/
The development of QuickShift keeps moving forward. We are happy to announce that we are going live with the first hospitality business! The puzzling over shift schedules can soon be over for you, too. Over the course of the year we will onboard more and more businesses. We would be delighted if your business seized this opportunity as well!
Some constraints in creating shift schedules are the same for all businesses - for example, complying with the LGAV, the collective labor agreement of the Swiss hospitality industry. Most businesses also try to accommodate their employees' wishes. But every business owner also has entirely individual requirements for their schedules. In QuickShift, these requirements can be integrated in a simple manner. We are also happy to come by for a personal conversation to capture your needs and find an optimal solution for you.
Stay up to date with our newsletter and benefit from a 50% discount in the first year!
Finding bugs is somewhat like fishing with a net. At 200ok, we use fine, small nets (unit tests) to catch small fish, and big, coarse nets (integration tests) to catch the killer sharks.
We encourage you to start testing as soon as you have code.
Our mantra is: Test early. Test often. Test Automatically.
Writing End-to-End-Tests will improve the quality of your application for some simple reasons:
It's proven that by using Test-Driven-Development (TDD) practices, your code will have better design and fewer bugs. For example, Microsoft found in a study across multiple teams and products that TDD teams produce code with a 60-90% lower defect density.
Having said that, writing good End-to-End-Tests (aka Integration- or Feature-Tests) is not trivial and therefore not favored by some programmers. One reason is that, compared to other tests, they are rather unwieldy. More importantly though, when written without proper guidelines, they will often be brittle.
End-to-End-Tests, as the name suggests, have to integrate the whole system, and thus there is probably not much we can do about the unwieldiness, apart from having a nice DSL to make them more concise and thus more readable.
However, even if they are rather slow, they are fast compared to doing integration testing by hand. Why? For every new feature developed, the amount of regression testing work keeps growing. Let's compare the effort of manually testing an application over time (as more features are developed) with automated Integration-Tests.
With automated tests, you leave the hard, repetitive and boring work to the machine - the kind of job machines are very good at, whereas humans get bored and make mistakes on repetitive tasks.
As you can see, there definitely is an initial overhead to writing integration tests compared to testing by hand. However, as soon as some more features are developed, the automated test suite quickly gains ground and overtakes the manual testing process.
Note: Consider the math of the graph above as a rule of thumb. The actual effort for both automated and manual tests is very different for different kinds of programs. Therefore this is nothing more than a rough sketch, not a scientifically proven fact for every scenario.
So, yes, compared to other kinds of automated tests, Integration-Tests are unwieldy and slow. However, compared to the only other solution (manual testing), they are very fast!
Having spoken about speed, we should speak about brittleness.
A typical End-To-End-Test visits a page in the application, simulates some user interactions, like filling in a form and clicking a button or a link, and then asserts certain facts about the resulting page.
Over the years, we found that End-To-End-Tests often break when there is work done on the markup or even the design. The feature might still work perfectly fine, but the test used some implementation detail like a name or a CSS class that isn't in use anymore - and voilà: the test fails. This is unacceptable. It costs time and resources to fix these tests even though the code works and there is no regression. Such a test doesn't fulfill its duty as a warning system for regressions; it is a false positive. And a warning system that has too many false alarms is of no use - with too many false alarms, no one will duck and cover when the real alarm sounds. So let's fix those false positives.
The following example uses Rspec (with Capybara). This is a setup in a typical Rails app, but the overall strategy to solve the discussed issues can be applied in any technology stack. Capybara is very expressive, so it'll be easy for you to transpose the knowledge to a different stack.
Let's start with a typical -- a brittle -- test.
describe 'Login' do
it 'logs the user in' do
visit '/'
fill_in 'user_email', with: 'foo@bar.com'
fill_in 'user_password', with: 'secret'
find('.login').click
expect(page).to have_selector('.dashboard')
end
end
The test goes to the root page, fills in the login form, submits it and then asserts the existence of an element with a specific class on the resulting page. To do that, it needs to reference elements in the page, and that's OK. However, the test is tightly coupled to implementation details: the email field is named user_email, the password field is named user_password, the submit button has the class login, and the resulting page has an element with the class dashboard. That makes it a brittle test!
The origins of all these names and classes are beyond the scope of an Integration-Test. The names of the form fields are likely coupled to the model, and for the CSS classes it's likely that some styles are attached to them for layout purposes.
Having established that the origins of the names are beyond the scope of the test means that these names might change at any time. This leads to a situation where the test will fail and give a false positive: the feature still works, but the test fails because the feature has been implemented in a different way.
To solve this issue, it's good practice to introduce a CSS namespace for testing. Let's prefix the existing names with test-. In our test, we used two CSS classes, login and dashboard, so we introduce test-login and test-dashboard. Let us call these classes buoys.
In our markup, we add the buoy test-login to the submit button and the buoy test-dashboard to the element with the class dashboard on the resulting page. In our test, we replace the existing references to CSS classes with our new namespaced classes.
For form fields it's more tricky. The function fill_in only takes an id (or name), but we still want to use our buoys, so we have to add one level of indirection: first find the element in question, then query it for its name and use that as the first argument to fill_in.
input = find(:css, '.test-user-email')
fill_in input[:name], with: 'foo@bar.com'
We do this likewise for all form fields. By doing so, we gain multiple benefits: Through namespacing, we avoid naming conflicts. Using a dedicated namespace in CSS for writing tests makes the references from tests to markup and CSS explicit in both directions. Before, we could read a test and see that it referenced elements in the markup via a CSS selector or other means. But there was no way to read the markup and see that an element is of significance to a test.
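Applied throughout, the brittle login spec from the beginning might now read like this (a sketch; the test-user-password buoy is an assumption, analogous to the email field):

describe 'Login' do
  it 'logs the user in' do
    visit '/'
    # Find the fields via their buoys, then fill them in by name
    email = find(:css, '.test-user-email')
    fill_in email[:name], with: 'foo@bar.com'
    password = find(:css, '.test-user-password')
    fill_in password[:name], with: 'secret'
    find('.test-login').click
    expect(page).to have_selector('.test-dashboard')
  end
end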
Now, if we're refactoring the markup and we see a class like test-login on a button, we can assume that at least one test uses that class to identify the login button, and if it gets lost during refactoring, we expect that test to fail. Hence, it raises awareness that at least one test will fail if the class is removed. You don't get that out of regular CSS classes, because naturally you think they are for attaching styles - not tests. To a lesser degree of certainty, it additionally indicates that an element in the markup is covered by tests.
This ultimately allows us to define certain rules, which we apply when we work with these classes.
A buoy is a CSS class that starts with test-. In that way, buoys make up a namespace within CSS.
From your tests (or specs), only refer to buoys, and give them meaningful names. Don't use other CSS classes, nor other means of identifying elements in the markup.
Never attach any styles to buoys. Buoys are for attaching tests, not styles.
When doing front-end work, like a redesign or changing markup, be careful to not lose any buoys. Make a list of the buoys you remove and put them back in when you're done.
These rules will make our tests resilient to redesigns. This means that our tests will stay intact while we change the markup or design. They will prevent you from getting false positives from your test suite, where tests fail while your feature still works perfectly fine.
Happy testing!
Apart from a few tiny things here and there, everything works. One of those things is the ability to adjust brightness. The standard way for many window managers (WM) is to defer that functionality to xbacklight. However, this doesn't work on the MBP.
munen@debzen:~% xbacklight
No outputs have backlight property
xbacklight looks for some settings in /sys/class/backlight, but it's not configurable under which sub-directory it looks. And for some reason, the drivers for the MBP put the settings in a different directory.
While it would be proper to fix this upstream, I decided to work on this while on the train and offline - so I took a quicker route and re-implemented the basic functionality in Ruby. Being able to get this done in a couple of minutes is a testament to the transparency and ease of use of a Linux system.
Requirements:
/usr/local/bin/brightness:
#!/usr/bin/env ruby
# coding: utf-8
# Get maximum and current brightness from `/sys` which is provided by
# the kernel
@max_brightness = `cat /sys/class/backlight/gmux_backlight/max_brightness`.to_i
@brightness = `cat /sys/class/backlight/gmux_backlight/brightness`.to_i
def brighter
@brightness = (@brightness * 1.1).to_i
# Failsafe
@brightness = @max_brightness if(@brightness > @max_brightness)
# Start with a little light
@brightness = 50 if (@brightness < 50)
end
def darker
@brightness = (@brightness * 0.9).to_i
@brightness = 0 if (@brightness < 40)
end
# Note: This needs passwordless sudo privileges
def set_brightness
`echo #{@brightness} | sudo tee /sys/class/backlight/gmux_backlight/brightness`
puts @brightness
end
def get_status
  # Report brightness as a percentage of the driver's maximum
  puts "💡 #{(100 * @brightness / @max_brightness.to_f).to_i}%"
end
case ARGV[0]
when "darker"
darker
set_brightness
when "brighter"
brighter
set_brightness
when "status"
get_status
end
If xbacklight doesn't work for you and the above script doesn't work either, then your driver might just use yet another folder to expose the brightness controls. Check whether there's any folder in /sys/class/backlight - if so, that's probably the one, and you can change the above script accordingly.
This script should reside in /usr/local/bin and have executable permission.
The parameters are:
brighter: Increases brightness by 10%
darker: Decreases brightness by 10%
status: Prints the brightness status in percent
First, find the keycodes of your brightness keys on the keyboard.
Start the xev command in a terminal, hit the brightness keys and look for the keycodes in the verbose output.
Secondly, register the brightness script as a shortcut. For example, this is the configuration for the i3 window manager:
~/.i3/config:
bindcode 232 exec "brightness darker"
bindcode 233 exec "brightness brighter"
Restart your window manager - or reload it with your config file. For i3, the latter is bound to Mod+Shift+R. Then test the key bindings.
Many people use on-screen displays for this. There are many existing tools that you could employ. I want this information in the status bar - so I augmented the i3status status bar.
To include new information in i3status, you have to write a wrapper script around i3status:
/usr/local/bin/my_i3status.sh:
#!/bin/sh
# shell script to prepend i3status with brightness info
i3status | while :
do
read line
echo "`brightness status` | $line" || exit 1
done
To use it, change the status_command in .i3/config.
bar {
status_command my_i3status.sh
}
Reload your i3 config once again and enjoy the happy light bulb^^
(ns redux.reducer)
(defmulti Action
(fn [state action]
(:type action)))
(defmethod Action :default [state {:keys [type] :as action-data}]
(prn "Action of " type " not defined.")
state)
It is possible to keep the redux implementation so simple because of two features of Clojure. One of those features we are taking advantage of is multimethods, my favorite form of Clojure's runtime polymorphism. Throughout our app the only way to change the state of the redux store is by dispatching actions. You can define action types by setting :type. Use it like this:
(r/dispatch! {:type :add-todo :name "brew coffee"})
Your reducers always have to return state!
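For instance, a reducer handling the :add-todo action from above might look like this (a sketch - the :todos key and the shape of a todo are assumptions, not taken from the actual app):

(defmethod Action :add-todo [state {:keys [name]}]
  ;; conj the new todo onto the list, creating the list on first use
  (update state :todos (fnil conj []) {:name name :done? false}))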
(ns redux.core
(:require-macros [cljs.core.async.macros :refer [go go-loop]])
(:require
[cljs.core.async :as a]
[reagent.core :as r]
[redux.reducer :refer [Action]]))
(defonce !state (r/atom {}))
(defonce !actions (a/chan))
(defn dispatch! [action]
(a/put! !actions action))
(go-loop []
(when-let [a (a/<! !actions)]
(swap! !state Action a)
(recur)))
The other feature we are using to concisely implement redux is channels. Channels allow us to dispatch actions asynchronously from within other actions, which takes away at least half of the pain of building single-page applications.
The redux store is a ratom, which is watched by reagent components and triggers re-renders.
To query state in a component, just require [redux.core :as r] and use @r/!state.
Redux can help you structure your application. It becomes natural to have a list of reducers and a list of components. While this is a sensible way to split state-changing logic from the view in early stages of development, you might later group reducers/components by use case.
.
├── redux
│ ├── core.cljs
│ └── reducer.cljs
└── app
├── components
│ ├── box.cljs
│ ├── button.cljs
│ ├── item.cljs
│ ├── screen.cljs
│ └── snackbar.cljs
├── core.cljs
├── reducers
│ ├── components.cljs
│ ├── validation.cljs
│ ├── form.cljs
│ └── websockets.cljs
└── utils.cljs
If you agree with Clojure's way of dealing with complexity through isolation, you will agree with me on this point.
(defmethod Action :blur [s {:keys [id evt]}]
(r/dispatch! {:type :close-warning})
(let [val (-> evt .-target .-value)]
    (r/dispatch! {:type :update-comp :id id :data val}))
s)
While working on a reducer for an action type, you are looking at a pure1 function of state and action. There is neither any other state nor any affected components elsewhere; you can focus on building and returning the next state.
In order to implement a redux app, start with the event handler, create empty reducers and implement reducers one by one.
1 We are dispatching an action from within the function. This is a semantically observable side effect, and strictly speaking the function is not pure. Nevertheless, the side effect is applied in a controlled way to our system, and reducers remain easy to reason about. The argument of reduced cognitive load holds.
Requesting the metadata is easy - it's just a GET request to the root of the bucket's URL: http://[bucket_name].s3.amazonaws.com/
The response will look like this:
<ListBucketResult>
<Name>your-bucket-name</Name>
<Prefix/>
<Marker/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>your_filename.json</Key>
<LastModified>2018-02-24T21:33:05.000Z</LastModified>
<ETag>"da3cf2b69a251d0545fb67feb7b1e7ea"</ETag>
<Size>5649</Size>
<StorageClass>STANDARD</StorageClass>
</Contents>
</ListBucketResult>
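From the browser, fetching and parsing that listing takes only a few lines (a sketch; the bucket URL is a placeholder):

fetch('http://your-bucket-name.s3.amazonaws.com/')
  .then((response) => response.text())
  .then((xml) => {
    const doc = new DOMParser().parseFromString(xml, 'application/xml');
    // Collect the keys (file names) from the listing
    const keys = Array.from(doc.getElementsByTagName('Key'))
      .map((el) => el.textContent);
    console.log(keys);
  });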
The premise for this to work is that your bucket has to be configured correctly - your users will need to be able to list the bucket and to get objects from it. This is the appropriate AWS policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::{{ bucket_name }}",
"arn:aws:s3:::{{ bucket_name }}/*"
]
}
]
}
Furthermore your bucket needs to be configured to "host a website". This is a property that you can set in the AWS S3 console.
Modest accommodations: all the participants at the «Hackathon» need is a spot for their laptop.
Stepping into the entrance hall of the old spinning mill in Linthal, you feel transported to another time. Worn wooden doors, tiles and the color orange recall the 1980s. The entrance hall is empty; everything seems abandoned - almost as if the mill had been given up after its heyday. Then something rumbles in a side room, and a young man with black horn-rimmed glasses steps out. He's only a guest here, he says. «The boss is over there,» he says, pointing to an inconspicuous door. The man walks off, turns around after a few seconds and looks through his glasses with wide eyes: «I'm Stefanos, by the way,» he adds, and strides off with a broad grin.
The brain runs a marathon. Behind this door hides another world. It consists of laptops, lots of cables, Club-Mate iced tea, cold pizza from the day before, sticky notes and coffee. And it is international. Around 20 people from Switzerland, Germany, Greece and even Slovenia are in the room. Some of them worked at their laptops until 4 a.m. The office doubles as dormitory and canteen. Mattresses lie on the floor, and mugs, bottles of mineral water and snacks stand everywhere.
«The goal is to create added value for the Glarnerland with these projects.» Nico Schottelius, managing director of Ungleich GmbH
Nico Schottelius interrupts his conversation and, while walking, pulls off his green knitted sweater with its penguin motif, revealing the Digital Glarus coworking-space logo on his T-shirt. The managing director of Ungleich GmbH in Schwanden organized the «Hackathon», a hacking marathon. It took place in the spinning mill in Linthal. «Hack4Glarus» lasted 42 hours, because you also have to run 42 kilometers in a marathon.
On Friday evening, the participants began working on their projects. «The goal is to create added value for the Glarnerland with these projects,» says Schottelius. From a directional Wi-Fi antenna that transmits 15 kilometers, to keyless opening of a front door, to an automated ordering system for food and drinks: a lot is being tested here. «Testing, that's an important point,» adds Alain Lafon. He is the managing director of 200ok GmbH from Zurich, which also has an office in Glarus. He co-organized the «Hackathon». «The term 'hack' means to try something - to see what is possible at all.» The term, he says, has taken on a negative connotation, especially because of the media.
«It's important to us that people can learn playfully this weekend. There doesn't need to be a finished solution,» explains Lafon. «Exactly - the project doesn't have to be perfect. But the concept should be good and feasible,» adds Schottelius. He is especially proud of the participants who joined without any technical background. «They simply wanted to learn something and now go out into the world knowing more.» That benefits them, but also society, since they can now apply their knowledge in other situations.
Many can profit. Schottelius has even received a purchase inquiry already: «A man walking past the spinning mill noticed that something was being worked on here.» Talking to Schottelius, he discovered the directional Wi-Fi antenna, the so-called Wi-Fi bridge, and asked whether he could use it at home, too - his house sits somewhat higher up, which is why his internet reception is poor. The organizer is delighted: «That was exactly our goal: to bring the population a benefit with our tests.»
There are many more ways to make life easier for the people of Glarus, Schottelius says. He gets all fidgety and runs off, returning with two boxes. «I'm so glad this product is finally available in Switzerland.» It is an electronic door lock. It lets you open your front door with just the phone in your pocket, via Bluetooth or Wi-Fi. That's not only convenient when your hands are full. People are getting older and older, and for them it could be a huge service, he explains, giving an example: «Imagine an elderly person - one who depends on a walker.» That person no longer needs to search for the house key and has their hands free to hold on. Alain Lafon adds: «For companies, this system can also be very practical.» The door can also be opened with a timer, which can even be set up from abroad, granting access to guests or temporary employees who don't have a key.
The environment is a topic, too. Schottelius also knows that digitalization has its dark sides. On the topic of energy efficiency he even gets worked up. «This topic concerns me a lot,» he explains. «Data centers are usually huge power hogs. But our data center here in the spinning mill is three times more efficient than most in the world.» There are two reasons for this: first, thanks to the mill's existing infrastructure, it runs 100 percent on hydropower. Second, the machines stand next to, rather than on top of, each other. «Because we have a lot of space here, we can do that, and we save an enormous amount of energy that would otherwise be needed for cooling,» he says.
«It's important to us that people can learn playfully this weekend.» Alain Lafon, managing director of 200ok GmbH
Electrosmog hardly affects Digital Glarus either. «Since the Glarnerland has a well-developed fiber-optic network, we don't depend on Wi-Fi routers.» And even if they did, their radiation is not comparable to that of a smartphone. «A mobile phone's radio signal is three to four times stronger than that of a Wi-Fi router,» explains Schottelius.
While he talks about electrosmog, there's a quiet whirring in the background. It's Stefanos, the man with the horn-rimmed glasses. The Greek walks back and forth in the room, testing the electronic door lock. If the test succeeds, keys may soon no longer be needed in the Glarnerland.
Crowdfunding using established platforms can be very expensive. Usually, there is a fee of 12-20% that has to be paid. Swiss Crowdfunder is fully open source - therefore it can be used by anyone to start their own crowdfunding platform or campaign.
Swiss Crowdfunder is released under the popular AGPL license, has a contribution guide as well as a code of conduct, is fully configurable and has a community Mattermost chatroom.
For more information, visit https://github.com/200ok-ungleich/swiss-crowdfunder, star the project and become part of it!
Next to the OSS version, we run and support our hosted SaaS version at https://swiss-crowdfunder.com/. If you don't want to get too technical, get in touch with us at info@200ok.ch and we will host your campaign for a fee of 8%.
We also offer coaching and support in how to set up a proper crowdfunding campaign and optimize your marketing efforts. Call or write us anytime!
This event is hosted at a great location (the same as the launch of swiss-crowdfunder.com) by our friends from ungleich glarus ag.
We from 200ok will be there as well - as are our friends from the Insopor Zen Academy. Join us for three great days of hacking, fun and learning!
Join the event now on the dedicated page, on Facebook or on Meetup.
What will be the hack topics?
Anything that is related to Glarus and can improve people's lives. Below are some fun hack examples. Of course, the topics will not be limited to the list below - you can come up with your own project!
Where does Hack4Glarus take place?
It will happen at the Spinnerei Linthal, a very cool old factory hall in Linthal. It's also where ungleich's new data center, Data Center Light, is.
What will be provided at Hack4Glarus?
What do I need to bring to Hack4Glarus?
And there's an extra: the Fridolinpass.
If you have an extraordinarily good idea, apply for the Fridolinpass! Write us why you should participate in Hack4Glarus and how the community will benefit from your hacking. For those who win the Fridolinpass, we will cover their travel costs within Switzerland to Hack4Glarus.
How can I apply?
Apply here and submit your ideas: https://docs.google.com/forms/d/e/1FAIpQLSezSIJwO5gcvwrAhxHjsqwXM72eiU7j627olhgWKGDHHIaoWQ/viewform?usp=sf_link
Attention: only a limited number of seats is available. Apply now!
Swiss Crowdfunder is a joint product that we built together with our good friends from ungleich glarus ag. As the name implies, it is a crowdfunding platform, but with a twist! Whereas supporters of traditional crowdfunding campaigns only receive certain goodies (like stickers, t-shirts or even newly created products), supporters on Swiss Crowdfunder can actually invest in companies.
Equity funding (or crowdinvesting) enables supporters to effectively buy shares of an otherwise not publicly traded company. It is therefore a great option for companies to raise money on the one hand, and on the other hand an even greater option for the general public to invest in startups and established companies alike.
There's already a juicy campaign online, too! ungleich glarus ag is expanding its Data Center Light to another location. The new location is in Linthal, GL (Switzerland) and is perfectly situated within great infrastructure - it even has its own hydroelectric power plant, so the whole data center is going 100% green! Apart from that, they are very serious about open source - their whole tech stack runs on open source software.
Their plans have worked out well so far. To accelerate their expansion, they are raising CHF 250,000 and are giving everyone the option to buy shares.
On Saturday, we had a great launch event at the new location. I'll leave you with some impressions.
Countdown to launch
Alain enjoys the launch event, but keeps on coding!
Live music
Rob Moir gave a fantastic concert - unplugged, just him and his guitar. It was amazing!
Friendly get together of like-minded people
A data-driven static site (using a self-developed OSS filesystem DB as its source). The site is built in Clojure and is OSS itself.
We're still working on the styling and adding more data, but we like to release early and often^^
You can merge tiles by moving the whole board either horizontally or vertically. If you manage to get the 2048 tile, you win. After a merge (= player's turn), a tile will appear randomly on an empty slot. The chance of that tile being a 4 is 0.1; the chance of it being a 2 is 0.9. After reaching 2048, you can keep playing and the same rules apply.
In fact we are going to write a bot that is able to get the 8192 tile.
First of all, we need a measure of how well the bot performs. If you want to come up with your own heuristic, pause here and play a few rounds.
After a few rounds, you probably realize that the largest tile should stay in a corner. Intuitively the larger tiles should stick together.
We can formalize these observations by splitting the heuristic score into two parts:
The cluster score is simply the actual board weighted by a matrix:
(def matrix [[15 14 13 12] [11 10 9 8] [7 6 5 4] [3 2 1 0]])
(defn cluster-score [board matrix]
(reduce +
(for [x [0 1 2 3]
y [0 1 2 3]]
(* (nth (nth board x) y) (nth (nth matrix x) y)))))
This is a measure of how monotone the board is. It leads to the highest tile sticking to the upper-left corner.
This score is actually a penalty: the higher this score, the worse for the player. It is calculated by summing the differences between each tile and its adjacent neighbours.
(defn neighbour-score
[x y board]
(reduce +
[(Math/abs (- (nth (nth board x) y) (nth (nth board (max (dec x) 0)) y)))
(Math/abs (- (nth (nth board x) y) (nth (nth board (min (inc x) 3)) y)))
(Math/abs (- (nth (nth board x) y) (nth (nth board x) (max (dec y) 0))))
(Math/abs (- (nth (nth board x) y) (nth (nth board x) (min (inc y) 3))))]))
(defn hetero-score
[board]
(reduce +
(for [x [0 1 2 3]
y [0 1 2 3]]
(neighbour-score x y board))))
Finally, subtract the penalty score from the cluster score:
(defn score
"heuristic score for a given board"
[board]
(- (cluster-score board matrix) (hetero-score board)))
And we are left with a function that maps a game board to a score. We are going to implement an algorithm that tries to maximize this score.
The next part consists of a function to simulate moves. A move is either a player move or a move by the environment (= spawning tiles). This distinction is important, as we will see later on. The function we are looking for has the following signature:
(defn execute-move
"returns board after execution of move"
[board move]
next-board)
The first observation is, considering only horizontal moves, that each row merges independently. Our second observation reveals that vertical moves can easily be transformed into horizontal moves by transposing the board. The last observation shows that a left-merge equals a right-merge of the reversed vector.
If we solve merging of a single row along a single axis, we solve simulating player moves:
(def moves {:up 0 :down 1 :left 2 :right 3})
(defn remove-zeroes
[row]
(vec (filter (complement zero?) row)))
(defn pad-zeroes
"right pads zeroes to length 4"
[row]
(loop [row row]
(if (>= (count row) 4)
row
(recur (conj row 0)))))
(defn merge-pair
"merges two elemnts of a row to the left, considering the original row"
[row a b original]
(if (and (= (nth row a) (nth row b))
(or (nil? original) (= (nth original 2) (nth row 2))))
(-> row
(assoc a (+ (nth row a) (nth row b)))
(assoc b 0))
row))
(defn merge-row-left
[row]
(-> row
(remove-zeroes)
(pad-zeroes)
(merge-pair 0 1 nil)
(merge-pair 2 3 nil)
(merge-pair 1 2 row)
(remove-zeroes)
(pad-zeroes)))
(def m-left (memoize merge-row-left))
It is noteworthy that we memoize the function merge-row-left. Assuming the maximum tile we want to reach is 8192 (= 2^13), there are only 13^4 possible combinations to make up a row. This function will potentially be called millions of times per second while searching for the score-maximizing player move.
Introducing some transpose functions leads to our goal function execute-move:
(defn- merge-row-right
[row]
(-> row
(reverse)
(m-left)
(#(vec (reverse %)))))
(def m-right (memoize merge-row-right))
(defn- merge-row
[move]
(if (= move :right)
m-right
m-left))
(defn- merge-rows
[board move]
(map (merge-row move) board))
(defn- transpose-move
[move]
(cond
(= move :down) :right
(= move :up) :left))
(defn- transpose
[board]
(apply mapv vector board))
(defn execute-move
[board move]
(if (> 2 (get moves move))
(transpose (merge-rows (transpose board) (transpose-move move)))
(merge-rows board move)))
We are using a search algorithm with an adaptive depth of search. While searching, the bot alternates between the chance layer and the max layer. The chance layer is where the environment spawns a tile randomly. We don't know where it's going to happen, and we don't know which tile it's going to be: we have to calculate with the expected value over all possible boards:
(defn- average
[numbers]
(/ (apply + numbers) (count numbers)))
(defn- all-spawns
"returns a list of boards by spawning tiles of `kind` on all free slots"
[board kind]
(->>
(for [x [0 1 2 3]
y [0 1 2 3]]
(if (= (nth (nth board x) y) 0)
(assoc-in (vec board) [x y] kind)))
(filter (complement nil?))))
(defn calculate-chance
"returns heuristic score of current chance node"
([board depth limit original]
(if (= board original)
0
(calculate-chance board depth limit)))
([board depth limit]
   (if (= depth limit)
(ai/m-score board)
(average (concat
(map #(* (calculate-max % (inc depth) limit) 0.9) (all-spawns board 2))
(map #(* (calculate-max % (inc depth) limit) 0.1) (all-spawns board 4)))))))
At the max layer, on the other hand, we are in control. We can simply execute all possible moves given a board and return the highest heuristic score using calculate-max.
(defn- all-moves
"returns a list of all possible moves given a board"
[board]
(filter #(not= % board) (map #(game/execute-move board %) ai/moves)))
(defn calculate-max
"returns heuristic score of current max node by returning the max value of the children"
([board depth limit original]
(if (= board original)
0
(calculate-max board depth limit)))
([board depth limit]
   (if (= depth limit)
(ai/m-score board)
(apply max (concat (map #(calculate-chance % (inc depth) limit) (all-moves board)) '(0))))))
Lastly, we expose our magnificent AI through a single function best-move, returning the best move given a board:
(defn- count-zeroes
"amount of empty slots on a board"
[board]
(or (-> board
(flatten)
(frequencies)
(get 0)) 0))
(defn- decide-depth
"set depth of search according to amount of empty slots left"
[number]
(cond
(> number 12) 1
(> number 7) 2
(> number 4) 3
(> number 1) 4
(>= number 0) 6
:else 2))
(defn- get-depth
"returns depth of search for current board"
[board]
(-> board
(count-zeroes)
(decide-depth)))
(defn best-move
"returns best move for a board"
[board]
(let [moveh (sort-by val > (into (sorted-map)
(pmap
(fn [x] {x (ex/calculate-chance (game/execute-move board x) 0 (get-depth board) board)}) moves)))]
(get moves-map
(first (keys moveh)))))
(def m-best-move (memoize best-move))
Using pmap, I am able to get 100% CPU usage on a Java 8 HotSpot VM and a dual-core machine. At most there are only 4 functions being executed in parallel, so the performance gain through parallelization is probably not that great on machines with more cores.
You can find the source in this repo.
As an added bonus, we have the best working conditions we could ever ask for.
Come and join us any time for a coffee, a tea or a coworking session in our offices in Glarus or Schwanden.
Don't get me wrong, I'm not talking about a file system database like sqlite here. I'm using the term database loosely. Think of a bunch of config files that make up a database you want to query. That's what fsdb can do for you: you point it at a directory, and it reads the data from the directory tree and returns a data structure that you can query.
Ok, enough talk - here is the example from the README...
% tree example
example
├── people.edn
├── technologies.edn
└── technologies
└── clojure.yml
$ cat example/people.edn
{:rich {:name "Rich Hickey"}}
$ cat example/technologies.edn
{:clojure {:year "unknown"}}
$ cat example/technologies/clojure.yml
---
year: 2007
Reading this structure with fsdb/read-tree will result in the following data structure:
{:example
{:people {:rich {:name "Rich Hickey"}}
:technologies {:clojure {:year 2007}}}}
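Using it from the REPL might look like this (a sketch - the exact namespace is an assumption):

(require '[fsdb.core :as fsdb])

(def db (fsdb/read-tree "example"))

;; query it like any nested map
(get-in db [:example :technologies :clojure :year])
;; => 2007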
In the example you can observe multiple aspects of fsdb.
So "Having the data your way" means you get to decide when and where to split up your data file into subdirectories and smaller files, which makes it easier to keep track of your data.
In addition to structuring, there are many more reasons you might want to split a database into multiple files.
If your files are under version control, splitting them up means reducing potential merge conflicts, and you can even choose to exempt some files from being tracked.
You can also mix and match formats. Some data might be easier to edit in YAML than EDN, or the other way around. (Other formats can easily be supported. Drop me a line or send a Merge Request for the formats you might need.)
Maybe some of your files are generated by surrounding automation; fsdb will help you mix different sources into one queryable data structure.
Please find fsdb on Gitlab and Clojars:
vidir lets you edit the contents of a directory in your $EDITOR. If you're using Debian, it's contained in the moreutils package (which has other helpful utilities).
However, if you're using Emacs as your editor, there's no need for vidir. In fact, there's some extra work involved in opening a shell, changing to the directory in which you're already working and then using vidir. Besides, Emacs comes with a built-in mode that already has the same functionality. The well-known dired-mode has a lesser-known option to actually edit the dired buffer. This mode can be toggled with M-x dired-toggle-read-only (or C-x C-q). Then you make your edits to the files (rename, move, delete) and commit them using the familiar C-c C-c.
Here's the official documentation on editing dired buffers, and here's a short video demonstration of me using writable dired buffers.
On such occasions, your *nix shell yields better tooling!
First, fire up a terminal and start nload to view network traffic. If you are on macOS, you can also use the graphical tool Activity Monitor.
Then, in a second terminal, start a permanent download or upload, respectively, with the following commands:
ssh lafo@dublin.zhaw.ch "cat /dev/urandom" > /dev/null
cat /dev/urandom | ssh lafo@dublin.zhaw.ch "cat > /dev/null"
Note that depending on your ssh configuration, you might need to disable compression on the client or server side.
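For example, compression can be disabled per host in ~/.ssh/config (a sketch):

Host dublin.zhaw.ch
    Compression no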
Starting Chromium with
chromium --force-device-scale-factor=1.5
yields the same UI size that I was used to before the upgrade.
Looking at the Chromium upstream bug tracker, this issue has been solved since the end of May. It'll find its way to Debian eventually. In the meantime, the workaround works just as well.
The exact package version where this behaviour has been observed is: 60.0.3112.78-1
tramp-mode is great for editing files remotely, but sometimes having a shell and Emacs together on the same file can be invaluable. eshell opens up a shell which is like a regular Unix shell, but is written completely in Elisp, so it's built into Emacs and is completely portable. eshell has many interesting properties, but let's focus on editing files remotely.
When in eshell, it is possible to change the working directory to a remote directory with the same syntax as tramp-mode. Yes, no manual ssh-ing to the remote machine - it's more like a fuse-sshfs connection, but without fuse and without the manual mounting!
Changing to a remote directory is trivial:
~ $ cd /ssh:root@your-host.io:
/ssh:root@your-host.io:/root $
From there, you can continue to use the shell, but you can also start to edit any file with dired, C-x C-f or find-file.
Apart from that it feels pretty much like a mount point to which you can copy files from and to and so forth.
It's magic!
A common scenario when looking into S3 is wanting to list files ordered by date, including metadata. On a Unix machine, this would be an ls -lt. If it's a very long list of files, you might want to cap the list - which again is very easy to achieve, for example with ls -lt | head -n 10.
Those two things are easy on a Unix machine, but not so straightforward on Amazon S3. Amazon does have an ls command which will list all files within [BUCKET]:
aws s3 ls s3://[BUCKET]
It is totally feasible to pipe the result of this command into head -n X. It does have serious drawbacks, though, because you might have a lot of files on S3:
Next to aws s3, which only implements the most high-level features such as cp and mv, there is a more powerful API: aws s3api.
With this API you can write queries and limit the amount of objects returned. Queries can include the LastModified timestamp. Limiting on this timestamp is not the same as ordering by time (as in ls -lt), but it's the closest you will get to this functionality on S3.
aws s3api list-objects \
    --bucket "[BUCKET]" \
    --query 'Contents[?LastModified>=`2017-04-10`][].{Key: Key, Size: Size, LastModified: LastModified}'
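If you additionally want to cap the result the way head -n 10 would, the CLI's pagination option --max-items can help (a sketch):

aws s3api list-objects \
    --bucket "[BUCKET]" \
    --max-items 10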
There's also a reason why extensions are disabled by default. Apparently, Chromium started downloading binary extensions that don't show up in the extensions list and had access to the Google Voice API - which sounds kinda scary. Well, maybe it is, maybe it isn't.
However, if you decide you can't live without extensions in Chromium, this is how you fix it:
Set this global config flag in /etc/environment:
export CHROMIUM_FLAGS='--enable-remote-extensions'
More background information on why this is the new default:
If you are using a modern VCS like git or mercurial, then committing and branching is actually very cheap. With those, it is actually best practice to commit early and often. If you don't want to commit "until you are done", then your branch will only ever have one commit, because branches already are the barrier between different new features (see A successful Git branching model). Instead, commit as soon as you have finished part of a new feature, like a test or an architecture stub.
This will open up a treasure trove of potential good things for future you and other programmers:
Also, do push regularly whenever you have made a commit. This will also yield great potential:
In conclusion, the coder's tip: make it a habit to commit and push whatever you have at the end of the day. Even if you forgot to do it during the day, making it a habit to commit and push at the end of the day will still get you a lot of the benefits.
(DSL stands for Domain Specific Language - a programming language that is closely modelled after the domain it is used in. Wikipedia has a good introductory article.)
Obie Fernandez wrote the reference on Rails. In this podcast he speaks about what a DSL is, the difference between internal and external DSLs, as well as the importance of a flexible syntax in the host language in order to make DSLs worthwhile.
http://podbay.fm/show/120906714/e/1175932332
Martin Fowler is a well known expert in Software Engineering specializing in patterns. In this podcast he talks with Rebecca Parsons about the definition of DSL, Internal vs. External DSLs, reasons to use DSLs and reasons not to and the DSL lifecycle.
Last but not least, there is a Ruby library which lets you map a DSL to your Ruby objects in a snap.
(defmacro defsample [name & args]
(let [options (apply hash-map args)]
`(defn ~name [input#]
(prn (str (:text ~options) " " input# ".")))))
(defsample sample :text "Hello")
(sample "world")
Keep on reading for a detailed description. In my quest to come up with the prototype above I started with the most basic version of a macro that generates a function.
;; working, but boring
(defmacro defsample [name]
`(defn ~name []
(prn "Hello World.")))
(defsample sample)
(sample)
The key elements of this macro are of course the Syntax Quote ` and the Syntax Unquote ~.
With a little modification the macro will take options and provide these to the generated function:
;; working, but still somewhat boring
(defmacro defsample [name & args]
(let [options (apply hash-map args)]
`(defn ~name []
(prn (:text ~options)))))
(defsample example :text "Hello world.")
(example)
Now, if we also want to pass in arguments to the generated function, the straight forward attempt will not work:
;; defunkt!
(defmacro defsample [name]
`(defn ~name [input]
(prn input)))
(defsample sample)
(sample "Hello world.")
The reason this does not work is that in the example above the Syntax Quote ` will fully qualify all symbols within. But since qualified symbols cannot be used in the params of a defn, this will result in a
CompilerException java.lang.RuntimeException: Can't use qualified name as parameter
So let's replace the Syntax Quote with a regular Quote ' then...
;; also defunkt!
(defmacro defsample [name]
'(defn ~name [input]
(prn input)))
(defsample sample)
(sample "Hello world.")
But, as you might have guessed, using a regular Quote instead of a Syntax Quote doesn't help, because the Syntax Unquote will not work properly and thus result in a
CompilerException java.lang.IllegalArgumentException: First argument to defn must be a symbol
So instead of using a given symbol, which would be fully qualified by the Syntax Quote, we have to use a generated symbol. This is a good practice anyway, since it prevents accidental naming collisions. In addition to the function gensym, which creates such a generated symbol, there is also a shorthand in the form of a reader macro that lets us mark a symbol as to be replaced with a generated one. It works by appending a # to the symbol.
;; working and almost there...
(defmacro defsample [name]
`(defn ~name [input#]
(prn input#)))
(defsample sample)
(sample "Hello world.")
Now, combining this with the macro that took options, we finally arrive at the prototype that you saw in the first paragraph.
…the cd command. You could argue that the Ruby community are no strangers to guerrilla patching. But it felt less awkward when I learned that zsh uses hooks instead of guerrilla patching to achieve the same goal: reacting to a change of directory.
In the meantime I tried a couple of things with zsh hooks to optimize work flows and what not, but as it turns out, everything boiled down to one hook, and one hook only:
autoload -U add-zsh-hook
# source .sourceme files
load-local-conf() {
# check file exists, is regular file and is readable:
if [[ -f .sourceme && -r .sourceme ]]; then
source .sourceme
fi
}
add-zsh-hook chpwd load-local-conf
The basic idea is: directories naturally act like contexts. Projects are structured in directories, the different concerns within a project could be structured by directories, and so on. If you store contextual functions, aliases, variables, or documentation - whatever you need on the console in the given context - in .sourceme files, the hook above will automatically source the file once you enter that directory, and you will have everything in there available.
Here is a (rather contrived, but nevertheless pretty clear) example: let's say you have two projects that keep images in an images directory. As new images come in, you find yourself repeatedly calling ImageMagick's mogrify command to reduce the images to a given size, and each project requires a specific image size. In the project's .sourceme you could define an alias:
alias resize-images='mogrify --geometry 100x images/*'
There is your first contextual helper.
Other examples are:
- a docker ps to make sure the required containers are running as I start working on a project
- a cat <<EOF to display some project-specific commands that I have trouble remembering

Here is how such a .sourceme could look in practice.
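The following is a hypothetical sketch of mine combining the ideas above; names and commands are illustrative:

# .sourceme -- sourced automatically by the chpwd hook on entering the project
docker ps                                             # are the required containers up?
alias resize-images='mogrify -geometry 100x images/*'
cat <<EOF
Project cheat sheet:
  resize-images  -- shrink incoming images to 100px width
EOF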
The other day I wished I had a visualization of the dependencies in a piece of ClojureScript code that over the course of the year has gotten a bit unwieldy. I did some thinking and some coding, and it turns out it's quite easy. Here are some of the highlights in code & images, but mostly code.
Reading a string into Clojure data is done with read-string. read-string only reads a single form, though; reading a whole file worth of forms is as easy as wrapping the string in an additional set of square brackets.
(read-string (str "[" (slurp path) "]"))
This returns the content of the file given by path as a Clojure data structure. In less homoiconic languages you would probably call this an Abstract Syntax Tree. Homoiconicity for the win!
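To illustrate the difference between reading a single form and reading several wrapped forms (an illustrative REPL session of mine):

(read-string "(+ 1 2)")
;; => (+ 1 2), a list, read but not evaluated

(read-string (str "[" "(def a 1) (def b 2)" "]"))
;; => [(def a 1) (def b 2)], multiple top-level forms read as one vector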
The data structure is a tree that we want to walk to find dependencies in the code. Here clojure.walk/prewalk comes in handy. We'll pass it a function which will be called for every node in the tree. I went with a multimethod since I initially expected that different types of nodes would require different actions, but it turns out the only thing we're actually interested in are symbols.
If the symbol is a member of a given set, namely def, defn, defn-, defmulti, and defmethod, we pay special attention to the next symbol, because it will be the dependant of the upcoming dependencies that we record. Every subsequent occurrence of a symbol then constitutes a dependency.
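A minimal sketch of that walk, under the assumptions above (my reconstruction; the actual implementation lives in the linked repository):

(require '[clojure.walk :as walk])

(def defining-symbols '#{def defn defn- defmulti defmethod})

(defn dependencies [path]
  (let [state (atom {:pending false :current nil :deps []})]
    (walk/prewalk
     (fn [node]
       (when (symbol? node)
         (cond
           (defining-symbols node) (swap! state assoc :pending true)
           (:pending @state)       (swap! state assoc :pending false :current node)
           (:current @state)       (swap! state update :deps conj [(:current @state) node])))
       ;; prewalk expects the (possibly replaced) node back
       node)
     (read-string (str "[" (slurp path) "]")))
    (:deps @state)))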
From there, the only thing left to do is to limit the dependencies to those where the dependee is also the dependant of another dependency, and to render the remainder in dot syntax to be visualized by graphviz.
Check out the code for more details.
I tried a couple, and just when I was about to decide that I wanted to try them all to find the best one, my search was cut short when I tried jq. On its web site it claims: jq is like sed for JSON data. And come to think of it, what would we do without sed?
And yes, it's that good. Here are two easy examples:
Extracting a field
Let's say you access an API to retrieve an access key. Then the request probably looks like this:
curl -s -X PUT --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
-d '{
"email": "your@email.address",
"password": "swaggerrocks"
}' \
'https://your.api.endpoint/'
This of course will return a whole Json object.
{"access":"1234567890987654321", "some":"more", "fields":"you don't care about"}
But you might only be interested in the access key. Pipe it through jq.
curl -s -X PUT --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
-d '{
"email": "your@email.address",
"password": "swaggerrocks"
}' \
'https://your.api.endpoint/' | jq -r .access
This will print
1234567890987654321
(which in my setup is then piped into xclip -i to copy it to X's primary clipboard). The -r switch gives you the raw output instead of surrounding the resulting string with quotes.
jq comes with its own mini language to define "filters" (more like transformers) to manipulate Json or extract data out of it.
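A couple of illustrative one-liners (examples of mine, not from the post):

# extract a nested field, raw output
echo '{"user":{"name":"jane"}}' | jq -r .user.name
# => jane

# transform every element of an array
echo '[1,2,3]' | jq 'map(. * 2)'
# => [2,4,6]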
Now that I have a tool at hand to work with Json nicely on the command line, it suddenly bothers me that I had to pass carefully crafted literal Json into curl earlier. Hence, in a second example, I will show how to use jq to generate Json.
Generate Json
To generate Json we tell jq with the -n switch to expect no input. We'll then use filters to add our data.
jq -n '.email="your@email.address"|.password="swaggerrocks"'
Using this technique we can refactor our command from earlier to
jq -n '.email="your@email.address"|.password="swaggerrocks"' \
| curl -s -X PUT -d @- \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
'https://your.api.endpoint/' \
| jq -r .access
I find some beauty in that.
Half those steps (3, 4 and 6) are manual and very repetitive if you want to have an incremental development experience. They can be automated completely.
Several frameworks support module hot-loading by now. ClojureScript and Elm were probably the first languages to support that paradigm, but these days it's possible in JavaScript as well.
This is a demo showing you how to incrementally build a super simple RiotJS application using module hot-loading. I never had to go back to the browser and hit 'reload'. To be able to do this, I used the RiotJS-Loader.
For the code of this demo, please see this repository.
Introducing one of my favorite tricks: changing the background color of the terminal while running an ssh session.
function ssh() {
    dbus-send --session /net/sf/roxterm/Options net.sf.roxterm.Options.SetColourScheme string:$ROXTERM_ID string:Tango
    /usr/bin/ssh "$@"
    dbus-send --session /net/sf/roxterm/Options net.sf.roxterm.Options.SetColourScheme string:$ROXTERM_ID string:Default
}
In my setup I'm using zsh and roxterm, but I'm sure it'll work for other tools as well if you adjust it to yours.
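If you are not on roxterm, the same trick can be pulled off with OSC escape sequences instead of roxterm's D-Bus interface. A sketch of mine; the color values are arbitrary and terminal support varies:

function ssh() {
    printf '\033]11;#44110a\007'   # OSC 11: set the terminal background (dark red)
    command ssh "$@"               # 'command' bypasses this function, avoiding recursion
    printf '\033]11;#000000\007'   # restore the default background
}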
<table>, you might have stumbled upon something that looks like a bug. If you nest Custom Tags within a <table> element, they will not render within said element, but outside of it. Let me clarify with an example and show how to fix it.
If you have code that nests elements like this:
<todoList>
  <table>
    <tbody>
      <todo each="{ allTodos() }">
      </todo>
    </tbody>
  </table>
</todoList>
Where <todo> is defined as a totally legit table row, you will find that the result does not look like you imagined it. If you check your browser's developer tools, you will see that the browser renders your <todo> elements unexpectedly outside the <table> element. It might look like this:
<todolist>
  <todo></todo>
  <table>
  </table>
</todolist>
The reason for this phenomenon is that browsers only allow known HTML elements within <table> and <tbody> elements. Anything else that is rendered within is simply ejected.
No worries, though! RiotJS has a proper way of handling this type of situation. Standard HTML elements can be used as Riot Tags when you use the data-is attribute. For the <todoList> example above, this is functional code:
<todoList>
  <table>
    <tbody>
      <tr each="{ allTodos() }" data-is="todo">
      </tr>
    </tbody>
  </table>
</todoList>
In this example, the <tr> tag is a legit HTML tag which the browser expects to be within a <table>. However, since we want to write modular, composable code, we tell RiotJS to actually use the <todo> component to compose the <tr>. For completeness, this would be valid code for a <todo> component:
<todo>
  <td>
    <input type="checkbox" checked={ done } />
    { name }
  </td>
</todo>
One last thing: a custom tag has to be lower-case only when used with data-is, otherwise Riot will not pick it up. This is likely a bug and will be fixed in a future version.
You can use templates for different types of capture items, and for different target locations.
The following code sets up three capture templates – for todos, media URLs and code snippets.
%? sets the exit point for the template, %^g prompts for a tag, %^{language} prompts for the language of the snippet, and the remainder is boilerplate to create an org-mode entry (*) and an org-mode snippet (#+BEGIN_SRC\n\n#+END_SRC).
http://orgmode.org/manual/Template-expansion.html#Template-expansion
(setq org-capture-templates
'(("t" "Todo" entry (file+headline (concat org-directory "inbox.org") "Tasks")
"* TODO %?\n %U\n %i\n %a")
("s" "Code Snippet" entry
(file (concat org-directory "snippets.org"))
;; Prompt for tag and language
"* %?\t%^g\n#+BEGIN_SRC %^{language}\n\n#+END_SRC")
("m" "Media" entry
(file+datetree (concat org-directory "media.org"))
"* %?\nURL: \nEntered on %U\n")))
Spoiler: This post is primarily gonna be an excerpt of my bookmarks collection. That’s because more intelligent men than me have already written great articles on the topic of how to become a great Python programmer.
I will focus on four primary topics: Functional programming, performance, testing and code guidelines. When those four aspects merge in one programmer, he or she will gain greatness no matter what.
Writing code in an imperative style has become the de facto standard. Imperative programs consist of statements that describe changes of state. While this might sometimes be a performant way of coding, it sometimes isn't (for example, because of the complexity involved) – also, it is probably not the most intuitive way when compared with declarative programming.
If you don’t know what I’m talking about, that’s great. Here are some starter articles to get your mind running. But beware, it’s a little like the red pill – once you tasted functional programming, you don’t want to go back.
There’s so much talk going on about how inefficient these ‘scripting languages’ (Python, Ruby, …) are, that it’s easy to forget that very often it’s the algorithm chosen by the programmer that leads to horrible runtime behaviour.
Those articles are a great place to get a feel for the ins and outs of Python's runtime behaviour, so you can get your high-performing application written in a language that is concise and fun to write. And if your manager asks about Python's performance, don't forget to mention that the second largest search engine in the world is run by Python – namely YouTube (see Python quotes).
Testing is probably one of the most misjudged topics in computer science these days. Some programmers really got it and emphasize TDD (test-driven development) and its successor BDD (behaviour-driven development) wherever possible. Others simply don't feel it yet and think it's a waste of time. Well, I'm gonna be that guy and tell you: if you haven't started out on TDD/BDD yet, you have missed out greatly!
It’s not about introducing a technology to replace that release management automaton in your company that mindlessly clicks through the application once in a while, it is about giving you a tool to deeply understand your own problem domain – to really conquer, manipulate and twist it the way you want and need it to be. If you haven’t yet, give it a shot. These articles will give you some impulses:
Not all code is created equal. Some can be read and changed by any great programmer out there. But some can only be read and only sometimes changed by the original author – and that maybe only a couple of hours after he or she wrote it. Why is that? Because of missing test coverage (see above) and the lack of proper usage of coding guidelines.
These articles establish an absolute minimum to adhere to. When you follow these, you will write more concise and beautiful code. As a side effect it will be more readable and adaptable by you or anyone else.
Now go ahead and spread the word. Start with the person sitting right next to you. Maybe you can go to the next hackathon or code dojo and start becoming great, proficient programmers together!
All the best on your journey.
If you liked this article, please feel free to re-tweet it and let others know.
hexlify() and unhexlify(). Since he asked for it, I’m going to share my answer publicly with you.
First of all, I’m defining the used nomenclature:
- ASCII characters are written in single quotes
- decimal numbers are of type Long, with an L suffix
- hex values have an x prefix
First, let me quote the documentation:
binascii.b2a_hex(data)
binascii.hexlify(data)
Return the hexadecimal representation of the binary data. Every byte of data is converted into the corresponding 2-digit hex representation. The resulting string is therefore twice as long as the length of data.

binascii.a2b_hex(hexstr)
binascii.unhexlify(hexstr)
Return the binary data represented by the hexadecimal string hexstr. This function is the inverse of b2a_hex(). hexstr must contain an even number of hexadecimal digits (which can be upper or lower case), otherwise a TypeError is raised.
I’ll begin with hexlify(). As the documentation states, this method converts every byte of a string into the corresponding 2-digit hex representation.
The ASCII character ‘A’ has 65L as numerical representation. To verify this in Python:
long(ord('A'))
65L
You might ask “Why is this even relevant to understand binascii?” Well, we don’t know anything about how ord() does its job. But with binascii we can re-calculate manually and verify.
binascii.hexlify('A')
'41'
Now we know that an ‘A’ – interpreted as binary data and shown in hex – resembles ’41’. But wait, ’41’ is a string and not a hex value! That’s no biggy; hexlify() represents its result as a string.
To stay with the example, let’s convert 41 into a decimal number and check if it equals 65L.
long('41', 16)
65L
Tada! It seems that ‘A’ = 41 = 65L. You might have known that already, but please, stay with me a minute longer.
To make it look a little more complex:
binascii.hexlify('A') == '%x' % long('41', 16)
True
Be aware that '%x' % n converts a decimal number n into its hex representation.
binascii.unhexlify() naturally does the same thing as hexlify(), but in reverse: it takes tuples of hex values and turns them back into binary data.
I’ll start off with an example:
binascii.unhexlify('41')
'A'
binascii.unhexlify('%x' % ord('A'))
'A'
Here, unhexlify() takes the numerical representation 65L of the ASCII character ‘A’

ord('A')
65

converts it into hex 41

'%x' % ord('A')
'41'

and represents it as a 1-tuple (meaning dimension of one) of hex values.
And now the conclusion – why might all of this be useful? Right now, I can think of at least four use cases:
Taking up the last example, I’ll show you how to visualize the Bell escape sequence (you know, that thing that keeps beeping in your terminal). Taken from the ASCII table, the numerical representation of the Bell is 7. Programmers might know it better as \a.
ord('\a') == 7
True
Presuming you read such a character from some kind of binary data – for example from a socket – and you want to visualize this data with print, you will not get any results – at least none visible. You might hear the Bell sound if you’re not on a silent terminal.
Now, finally – binascii to the rescue:
binascii.hexlify('\a')
'07'
Voilà, the dubious string is decrypted.
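Wrapped up as a tiny helper in the same spirit (my own sketch, in the Python 2 of the rest of this post):

import binascii

def visualize(data):
    # render arbitrary, possibly non-printable binary data as hex tuples
    return binascii.hexlify(data)

print visualize('\aA')  # => '0741'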
Finding the perfect IDE for Python isn’t an easy feat. There are a great many to choose from, but even though some of them offer really nifty features, I can’t help myself but feel attracted to VIM anyway. I feel that no IDE accomplishes the task of giving the comfort of complete power over the code – something is always missing. This is why I always come back to using IDLE and VIM. Those two seem to be the best companions when doing some quick and agile hacking – but when it comes to managing bigger and longer-term projects, this combo needs some tweaking. Once that’s done, VIM will be a powerful IDE for Python – including code completion (with pydoc display), graphical debugging, task management and a project view.
This is where we are going:
So, these are my thoughts on a VIM setup for coding (Python).
Modern GUI VIM implementations like GVIM or MacVIM give the user the opportunity to organize their open files in tabs. This might look convenient, but to me it is rather bad practice, because a second tab will not be in the same buffer scope as the first one, which takes away future interaction options between the two. Using MiniBufExplorer, however, gives the user tabs (not only in the GUI, but also on the command line) and leaves the classic buffer interaction intact.
Being able to neatly work on multiple files, the user still misses the potential his favourite IDE gives him in visualizing classes, functions and variables. Luckily there are quite a few plugins around to accomplish this task just as well. My favourite one would be TagList. TagList uses Exuberant Ctags for actually generating the tags (note: it really relies on this specific version of ctags – preinstalled implementations on UNIX systems won’t work).
A lot of coders have the habit of using TODO or FIXME statements in their code. Other IDEs often rely on having good third party project management software, but not VIM. There are great plugins like Tasklist reminding the programmer of those lines of code. Tasklist even implements custom lists – to me that’s an incredible productivity gain.
In these times, the programmer knows his or her programming language more or less by interactively finding out what it can do. Therefore code completion (sometimes also called IntelliSense – ugh) is a major feature. I have heard many people saying that this is where VIM fails – but luckily they are plain wrong ;) In V7, VIM introduced omni completion – given it is configured to recognize Python (if not, this feature is only a plugin away), Ctrl+x Ctrl+o opens a drop-down dialog like any other IDE – even the whole Pydoc gets displayed in a split window.
Probably the most wanted feature (besides code completion) is graphical debugging. VimPDB is a plugin that lets you do just that. I acknowledge it is no complete substitute for a full-fledged graphical debugger, but I honour the thought that having to rely on a debugger (often) is a hint of bad design.
From the eye-candy to the implementation. Don’t worry, it’s no sorcery.
First of all, make sure you have VIM version >=7.x installed, compiled with Python support. To check for the latter, enter :python print “hello, world” into VIM. If you see an error message like “E319: Sorry, the command is not available in this version”, then it’s time to get a new one. If you’re on a Mac, just install MacVIM (there’s also a binary for the console in /Applications/MacVim.app/Contents/MacOS/). If you’re on Windows, GVIM will suffice (for versions != 2.4, search for the right plugin). If you’re on any other machine, you will probably know how to compile your very own VIM with Python support.
Second, check if you have a plugin directory. In Unix it would typically be located in $HOME/.vim/plugin, in Windows in the Program Files directory. If it doesn’t exist, create it.
In this blog post, I'll show you how to manually install these plugins. Of course, there are other options, like using Tim Pope's wonderful Pathogen, or using the native VIM 8 plugin mechanism.
Now, let’s start with the MiniBufExplorer. Get it and copy it into your plugin directory. To start it automatically when needed and be able to use it with keyboard and mouse commands, append these lines in your vimrc configuration:
let g:miniBufExplMapWindowNavVim = 1
let g:miniBufExplMapWindowNavArrows = 1
let g:miniBufExplMapCTabSwitchBufs = 1
let g:miniBufExplModSelTarget = 1
For a project view, get TagList and Exuberant Ctags. To install Ctags, unpack it, go into the directory and do a compile/install via:
./configure && sudo make install
Ctags will then be installed in /usr/local/bin. When using a Windows machine, I recommend Cygwin with GCC and Make; it’ll work just fine. If you don’t want to tamper with your original ctags installation, you can propagate the location to VIM by appending the following line to vimrc:
let Tlist_Ctags_Cmd='/usr/local/bin/ctags'
To install TagList, just drop it into VIM’s plugin directory. You will now be able to use the project view by typing the command :TlistToggle.
Tasklist is a simple plugin, too. Copying it into the plugin directory will suffice. I like to have shortcuts and have added
map T :TaskList<CR>
map P :TlistToggle<CR>
to vimrc. Pressing T will then open the TaskList if there are any tasks to process; q quits the TaskList again.
VimPDB is a plugin, as well. Install as before and see the readme for documentation.
To enable code(omni) completion, add this line to your vimrc:
autocmd FileType python set omnifunc=pythoncomplete#Complete
If it doesn’t work then, you’ll need this plugin.
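If the Ctrl+x Ctrl+o chord feels clumsy, you can map it to something shorter (my own suggestion, not part of the original setup):

" trigger omni completion with Ctrl+Space in insert mode
" (works in GVIM/MacVIM; terminal VIMs may deliver Ctrl+Space as <Nul>)
inoremap <C-Space> <C-x><C-o>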
My last two recommendations are setting these lines to comply with PEP 8 (Python’s style guide) and to have decent eye candy:
set expandtab
set textwidth=79
set tabstop=8
set softtabstop=4
set shiftwidth=4
set autoindent
syntax on
There are certainly a lot more flags to help productivity, but those will probably be more user specific.
Have fun coding Python while not being bound to a specific IDE, but having all the benefits of VIM bundled with a few helping hands. Enjoy, everyone.
In this short tutorial, I’m going to show you how to scrape a website with the 3rd-party html-parsing module BeautifulSoup in a practical example. We will search the wonderful translation engine dict.cc, which holds the key to over 700k translations from English to German and vice versa. Note that BeautifulSoup is licensed just like Python, while dict.cc allows for external searching.
First, place BeautifulSoup.py in your modules directory. Alternatively, if you just want to do a quick test, put it in the same directory where you will be writing your program. Then start your favourite text editor/Python IDE (for quick prototyping like we are about to do, I highly recommend a combination of IDLE and VIM) and begin coding. In this tutorial we won’t be doing any design; we won’t even encapsulate in a class. How to do that, later on, is up to your needs.
What we will do:
All required code is embedded in this post. At the bottom, you will find the complete code in one snippet.
Now, let the magic begin. Those are the required imports.
import urllib
import urllib2
import string
import sys
from BeautifulSoup import BeautifulSoup
urllib and urllib2 are both modules offering the possibility to read data from various URLs; they will be needed to open the connection and retrieve the website. BeautifulSoup is, as mentioned, an html parser.
Since we are going to fetch our data from a website, we have to behave like a browser. That’s why we will need to fake a user agent. For our program, I chose to push the web statistics a little in favour of Firefox and Solaris.
user_agent = 'Mozilla/5 (Solaris 10) Gecko'
headers = { 'User-Agent' : user_agent }
Now let’s take a look at the code of dict.cc. We need to know how the form is constructed if we want to query it.
...
<form style="margin:0px" action="http://www.dict.cc/" method="get">
  <table>
    <tr>
      <td>
        <input id="sinp" maxlength="100" name="s" size="25" type="text"
               style="padding:2px;width:340px" value="">
        ...
      </td>
    </tr>
  </table>
</form>
...
The relevant parts are action, method and the name inside the input tag. The action is the web application that will get called when the form is submitted. The method shows us how we need to encode the data for the form, while the name is our query variable.
values = {'s' : sys.argv[1] }
data = urllib.urlencode(values)
request = urllib2.Request("http://www.dict.cc/", data, headers)
response = urllib2.urlopen(request)
Here the data gets encoded and packed into the request. (Note that passing a data argument to urllib2.Request actually issues a POST rather than the GET the form declares; it evidently works here all the same.) Notice that values is a dictionary, which makes handling more complex forms a charm. The form gets submitted by urlopen() – i.e. we virtually pressed the “Search”-button. See how easy it is? These are only a couple lines of code, but we have already searched dict.cc for a completely arbitrary word from the command line. The response has also been retrieved. All that is left is to extract the relevant information.
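If you would rather issue a genuine GET, matching the form's declared method, you could append the encoded data to the URL instead (a sketch of mine, reusing the variables above):

# no data argument means urllib2 issues a GET; the query is encoded in the URL
request = urllib2.Request("http://www.dict.cc/?" + data, headers=headers)
response = urllib2.urlopen(request)

Either way, the response can then be processed exactly as below.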
the_page = response.read()
pool = BeautifulSoup(the_page)
The response is read and saved as regular html code. This code could now be analyzed via regular string.find() or re.findall() methods, but this implies hard-coding a lot of the underlying logic of the page. Besides, it would require a lot of reverse engineering of the positional parameters, setting up several potentially recursive methods. This would ultimately produce ugly (i.e. not very pythonic) code. Lucky for us, there already is a full-fledged html parser which allows us to ask just about any generic question.
Let’s take a look at the resulting html code first. If you are not yet familiar with the tool that can be seen in the screenshot: I’m using Firefox with the Firebug addon. It is very helpful if you ever need to debug a website. Let me show an excerpt of the code.
<table>..
  <td class="td7nl" style="background-color: rgb(233, 233, 233);">
    <a href="/englisch-deutsch/web.html">
      <b>web</b>
    </a>
  </td>
  <td class="td7nl" ... /td>
</table>..
The results are displayed in a table. The two interesting columns share the class td7nl. The most efficient way would seem to be to just sweep all the data from inside the cells of these two columns. Fortunately for us, BeautifulSoup implements just that feature.
results = pool.findAll('td', attrs={'class' : 'td7nl'})
source = ''
translations = []
for result in results:
    word = ''
    for tmp in result.findAll(text=True):
        word = word + " " + unicode(tmp).encode("utf-8")
    if source == '':
        source = word
    else:
        translations.append((source, word))
for translation in translations:
    print "%s => %s" % (translation[0], translation[1])
results will be a BeautifulSoup.ResultSet. Each member of it is the HTML code of one cell of the class td7nl; notice that you can access each element like you would expect from a tuple. result.findAll(text=True) will return each embedded textual element of the cell; all we have to do is merge the different tags together. source and word are temporary variables that hold one translation in each iteration. Each translation is saved as a pair (tuple) inside the translations list.
Finally we iterate over the found translations and write them to the screen.
$ python webscraping_demo.py
kinky {adj} => 9 kraus [Haar]
kinky {adj} => nappy {adj} [Am.]
kinky {adj} => 6 kraus [Haar]
kinky {adj} => crinkly {adj}
kinky {adj} => kraus
kinky {adj} => curly {adj}
kinky {adj} => kraus
kinky {adj} => frizzily {adv}
In a regular application those results would need a little lexing, of course. The most important thing, however, is that we just wrote a translation wrapper onto a web application – in only 28 lines of code.
import urllib
import urllib2
import string
import sys
from BeautifulSoup import BeautifulSoup
user_agent = 'Mozilla/5 (Solaris 10) Gecko'
headers = { 'User-Agent' : user_agent }
values = {'s' : sys.argv[1] }
data = urllib.urlencode(values)
request = urllib2.Request("http://www.dict.cc/", data, headers)
response = urllib2.urlopen(request)
the_page = response.read()
pool = BeautifulSoup(the_page)
results = pool.findAll('td', attrs={'class' : 'td7nl'})
source = ''
translations = []
for result in results:
    word = ''
    for tmp in result.findAll(text=True):
        word = word + " " + unicode(tmp).encode("utf-8")
    if source == '':
        source = word
    else:
        translations.append((source, word))
for translation in translations:
    print "%s => %s" % (translation[0], translation[1])
All that is left is for me to recommend the BeautifulSoup documentation. What we did here really didn’t cover what this module is capable of.
I wish you all the best.