Items tagged with: JSON
Diaspora* Data Migration and Archival Lessons Learned
(So far)
This is a summary of my discoveries and learning over the past two months or so concerning Diaspora* data archives and references, as well as JSON and tools for manipulating it, specifically `jq`. It is a condensation of conversations, mostly from my earlier Data Migration Tips & Questions (2022-1-10) thread, though also scattered elsewhere. I strongly recommend you review that thread and address general questions there.
Discussion here should focus on the specific information provided, any additions or corrections, and questions on how to access/use specific tools. E.g., how to get #jq running on Microsoft Windows, which I don't have specific experience with.
## Archival Philosophy
I'm neither a maximalist nor a minimalist when it comes to content archival. What I believe is that people should be offered the tools and choices they need to achieve their desired goals. Where preservation is preferred and causes minimal harm, it's often desirable. Not everything needs to be preserved, but neither is it necessary to burn down every library one encounters as one journeys through life.
In particular, I'm seeking to preserve access for myself and others to previous conversations and discussions, and to content that's been shared and linked elsewhere. Several of my own posts have been submissions to Hacker News and other sites, for example, and archival at, say, the Internet Archive or Archive Today will preserve at least some access.
This viewpoint seems not to be shared by key members of the Diaspora* dev team and some pod administrators. As such, I'll note that their own actions and views reduce choice and agency amongst members of the Diaspora* community. The attitude is particularly incongruous given Diaspora*'s innate reliance on federation and content propagation according to the original specified intent of the content's authors and creators. This is hardly the first time Diaspora* devs have put their own concerns far above those of members of the Diaspora* community.
Information here is provided for those who seek to preserve content from their own profiles on Diaspora* servers likely to go offline, in the interest of maximising options and achieving desired goals. If this isn't your concern or goal, you may safely ignore what follows.
## Prerequisites
The discussion here largely addresses working with a downloaded copy of Diaspora* profile data in JSON format.
It presumes you have jq installed on your system, and have a Bash or equivalent command-line / scripting environment. Most modern computers can run jq, though you will have to install it: natively on Linux, any of the BSDs, macOS (via Homebrew), Windows (via Cygwin or WSL), and Android (via Termux). iOS is the only mass-market exception, and even there you might get lucky using iSH.
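Typical install commands look like the following (a sketch; exact package names and managers vary by platform):

```sh
sudo apt install jq    # Debian / Ubuntu and derivatives
brew install jq        # macOS, via Homebrew
pkg install jq         # Android, via Termux
```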
Create your archive by visiting your Pod's /user/edit page and requesting EXPORT DATA at the bottom of that page.
If you have issues doing so, please contact your Pod admin or other support contact(s). Known problems for some Joindiaspora members in creating archives are being worked on.
## Diaspora* post URLs can be reconstructed from the post GUID
The Diaspora* data extract does not include a canonical URL, but you can create one easily:

Post URL = `<protocol><host_name>/posts/<GUID>`

So for the GUID `64cc4c1076e5013a7342005056264835` we can tack on:

- protocol: `https://`
- host_name: `pluspora.com` (substitute your intended Pod's hostname here)
- the string literal `/posts/`

... yielding:

https://pluspora.com/posts/64cc4c1076e5013a7342005056264835
... which is the URL for a post by @Rhysy (rhysy@pluspora.com) in which I'd initially written the comment this post is based on, at that post's Pluspora Pod origin. Given that Pluspora is slated to go offline a few weeks from now, Future Readers may wish to refer to an archived copy here:
https://archive.ph/Y8mar
Once you have the URL, you can start doing interesting things with it.
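For instance, a one-liner (a sketch, assuming the gzipped export file and jq invocation used later in this post) will generate such a URL for every post in your archive:

```sh
zcat DIASPORA_EXTRACT.json.gz |
    jq -r '.user.posts[] | "https://pluspora.com/posts/\(.entity_data.guid)"'
```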
## Links based on other Pod URLs can be created
Using our previous example, links for the post on, e.g., diasp.org, diaspora.glasswings.com, diasp.eu, etc., can be generated by substituting for `host_name`:

- https://diasp.org/posts/64cc4c1076e5013a7342005056264835
- https://diasp.eu/posts/64cc4c1076e5013a7342005056264835
- https://diaspora.glasswings.com/posts/64cc4c1076e5013a7342005056264835
You can trigger federation by specifically mentioning a user at that instance and having them request the page.
I'm not sure when specifically federation occurs --- when the notification is generated, when the notification is viewed, or when the post itself is viewed. I've often encountered such unfederated posts (404s) as I've updated, federated, and archived my own earlier content from Joindiaspora to Glasswings. If federation occurs at some time after initial publication and commenting, the post URL and content should resolve, but comments made prior to that federation will not propagate.
(Pinging a profile you control on another pod is of course an excellent way to federate posts to that pod.)
Once a post is federated to a set of hosts it will be reachable at those hosts. If it has not yet been federated, you'll receive a "404" page, usually stating "These are not the kittens you're looking for. Move along." on Diaspora* instances.
(I'm not aware of other ways to trigger federation; if anyone knows of methods, please advise in comments.)
Note that comments shown on a post will vary by Pod, by when and how the post was federated, and by any blocks or networking issues between other Pods from which comments have been made. Not all instances necessarily show the same content; inconsistencies do occur.
## Links to archival tools can be created by prepending their URLs to the appropriate link
- Archive.Today: https://archive.is/https://pluspora.com/posts/64cc4c1076e5013a7342005056264835
- Internet Archive: https://web.archive.org/*/https://pluspora.com/posts/64cc4c1076e5013a7342005056264835

Note that the Internet Archive does not include comments, though Archive.Today does; see https://archive.is/almMw vs. https://web.archive.org/web/20220224213824/https://pluspora.com/posts/64cc4c1076e5013a7342005056264835
To include later comments, additional archival requests will have to be submitted.
## My Archive-Index script does all of the above
See My current jq project: create a Diaspora post-abstracter.
https://diaspora.glasswings.com/posts/ed03bc1063a0013a2ccc448a5b29e257
That still has a few rough edges, but works to create an archive index which can be edited down to size. There's a fair bit of "scaffolding" in the direct output.
Note that the OLD and NEW hosts in the script specify Joindiaspora and Glasswings specifically. You'll want to adapt these to YOUR OWN old and new Pod hostnames.

The script produces output which (after editing out superfluous elements) looks like this in raw form:
```
## 2012

### May

**Hey everyone, I'm #NewHere. I'm interested in #debian and #linux, among other things. Thanks for the invite, Atanas Entchev!**

> Yet another G+ refuge. ...

<https://diaspora.glasswings.com/posts/cc046b1e71fb043d>

[Original](https://joindiaspora.com/posts/cc046b1e71fb043d) :: [Wayback Machine](https://web.archive.org/*/https://joindiaspora.com/posts/cc046b1e71fb043d) :: [Archive.Today](https://archive.is/https://joindiaspora.com/posts/cc046b1e71fb043d)

(2012-05-17 20:33)

----

**Does anyone have the #opscodechef wiki book as an ePub? Only available formats are online/web, or PDF (which sucks). I'm becoming a rapid fan of the #epub format having found a good reader for Android and others for Debian/Ubuntu.**

> Related: strategies for syncing libraries across Android and desktop/laptop devices. ...

<https://diaspora.glasswings.com/posts/e76c078ba0544ad9>

[Original](https://joindiaspora.com/posts/e76c078ba0544ad9) :: [Wayback Machine](https://web.archive.org/*/https://joindiaspora.com/posts/e76c078ba0544ad9) :: [Archive.Today](https://archive.is/https://joindiaspora.com/posts/e76c078ba0544ad9)

(2012-05-17 21:29)

----
```
Which renders as:
2012

May

Hey everyone, I'm #NewHere. I'm interested in #debian and #linux, among other things. Thanks for the invite, Atanas Entchev!

Yet another G+ refuge. ...

https://diaspora.glasswings.com/posts/cc046b1e71fb043d

Original :: Wayback Machine :: Archive.Today

(2012-05-17 20:33)

Does anyone have the #opscodechef wiki book as an ePub? Only available formats are online/web, or PDF (which sucks). I'm becoming a rapid fan of the #epub format having found a good reader for Android and others for Debian/Ubuntu.

Related: strategies for syncing libraries across Android and desktop/laptop devices. ...

https://diaspora.glasswings.com/posts/e76c078ba0544ad9

Original :: Wayback Machine :: Archive.Today

(2012-05-17 21:29)
I've been posting those in fragments by year, as private posts to myself, to facilitate both federation and archival of the content. It goes in chunks because Diaspora* has a 2^16^ (65,536) byte per-post size limit. It's a slow slog, but I've only one more year (2021) to process manually at this point, with post counts running up to 535 per year.
## The Internet Archive Wayback Machine (at Archive.org) accepts scripted archival requests
If you submit a URL in the form of `https://web.archive.org/save/<URL>`, the Wayback Machine will attempt to archive that URL. This can be scripted for an unattended backup request if you can generate the set of URLs you want to save.
Using our previous example, the URL would be:

https://web.archive.org/save/https://pluspora.com/posts/64cc4c1076e5013a7342005056264835

Clicking that link will generate an archive request.
(The Internet Archive limits how frequently such requests will be processed.)
Joindiaspora podmins discourage this practice. Among the more reasonable concerns raised is system load.
I suggest that if you do automate archival requests, as I have done, you set a rate limit or sleep timer in your script. A request every few seconds should be viable. Here is a Bash "one-liner" reading from the file `DIASPORA_EXTRACT.json.gz` (change this to match your own archive file), which logs progress to a timestamped file `run-log.YYYYMMDD-hms`, e.g., `run-log.20220224-222158`:

```bash
time zcat DIASPORA_EXTRACT.json.gz |
    jq -r '.user .posts[] | "https://joindiaspora.com/posts/\(.entity_data .guid )"' |
    xargs -P4 -n1 -t -r ~/bin/archive-url |
    tee run-log.$(date +%Y%m%d-%H%M%S)
```
`archive-url` is a Bash shell script:

```bash
#!/bin/bash
url=${1}
echo -e "Archiving ${url} ... "
lynx -dump -nolist -width=1024 "https://web.archive.org/save/${url}" |
    sed -ne '/[Ss]aving page now/,/^$/{/./s/^[ ]*//p;}' |
    grep 'Saving page now'
sleep 4
```
Note that this waits 4 seconds between requests (`sleep 4`), which limits the script to a maximum of 900 requests per hour. There is NO error detection, and you should confirm that posts you think you archived actually are archived. (We can discuss methods for this in comments; I'm still working out how to achieve this.) The script could also be improved to process only public posts, something I need to look into. Submitting private posts won't result in their archival, but it does add time and load.
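A possible refinement along those lines (a sketch; note that the `public` flag is apparently missing on a significant number of posts, as discussed later) is to filter the URL list before submission:

```bash
zcat DIASPORA_EXTRACT.json.gz |
    jq -r '.user.posts[]
           | select(.entity_data.public == true)
           | "https://joindiaspora.com/posts/\(.entity_data.guid)"' |
    xargs -P4 -n1 -t -r ~/bin/archive-url
```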
There is no automated submission mechanism for Archive.Today of which I'm aware.
## Appending `.json` to the end of a Diaspora* URL provides the raw JSON data for that post

For example:

https://joindiaspora.com/posts/64cc4c1076e5013a7342005056264835.json
That can be further manipulated with tools, e.g., to extract the original post or comment Markdown text, or other information. `jq` is useful for this, as described in other posts under the #jq hashtag generally. Notably:

- Finding most frequent specific engagement peers
- Finding your most-engaged peers
- Extract the last (or other specified) comment(s) on a post
- Create a Diaspora archive-index
- unjsonify-diaspora --- extract the original Markdown of a Diaspora* post. Note that this can be simplified to `jq -Mr '.text'`, excluding the `sed` component (see comments on post).
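For example, combining the `.json` suffix with that simplified extraction to recover a post's Markdown source:

```sh
curl -s https://joindiaspora.com/posts/64cc4c1076e5013a7342005056264835.json |
    jq -Mr '.text'
```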
## As always: This is my best understanding
There are likely errors and omissions. Much of the behaviour and structure described is inferred. Corrections and additions are welcomed.
#DiasporaMigration #Migration #Diaspora #Help #Tips #JoindiasporaCom #jq #json #DataArchives #Archives
unjsonify-diaspora: A shell script for extracting post Markdown source
I've needed to do this often enough, and gotten it wrong sufficiently many times, that I've finally written a simple shell script using `jq` and `sed` to return straight Markdown from a Diaspora post. It mostly makes re-editing typos easier.

```sh
#!/bin/sh
jq -M .text | sed 's/^"//; s/"$//; s/\\r\\n/\n/g; s/\\"/"/g'
```
Saved as `unjsonify-diaspora`.

This extracts the `.text` element, trims leading and trailing quotes, replaces `\r\n` sequences with linefeeds, and unescapes quotes. Some further substitutions may be required, though so far it seems good.

"But how do you get the JSON?" you ask. Simple: append `.json` to any Diaspora URL.

For example: https://joindiaspora.com/posts/c7415d50e97f01385366002590d8e506

Becomes: https://joindiaspora.com/posts/c7415d50e97f01385366002590d8e506.json
You can either copy/paste the rendered content to the script, or (preferably) use a web utility such as `curl` or `wget` to pipe directly to the script:

```sh
curl --silent https://joindiaspora.com/posts/c7415d50e97f01385366002590d8e506.json |
    unjsonify-diaspora
```
Usually I'm doing this to fix typos, so you could send this on to an editor:
```sh
curl --silent <url> | unjsonify-diaspora | vim -
```
Then delete your old, buggy post and replace it with a new buggy one.
#json #jq #ShellScripts #JustLinuxThings #linux #diaspora #tips

Useful Diaspora* `jq` recipes: Extract the last (or other specified) comment(s) on a post
If you want to re-write the most recent comment on a thread and retrieve the original Markdown, you can fetch the post using its GUID URL with `.json` appended, then run it through a simple `jq` recipe:

```sh
curl -s '<post_url>.json' |
    jq -r '.interactions.comments[-1].text'
```
Say, for example, you realise you'd just muffed your most recent contribution to a thread and want to rewrite it, but don't want to have to re-tag all the Markdown from scratch.
I'm using `curl` as the web transport and piping output to `jq` on a command line. This is supported on Linux, the BSDs, macOS, Windows (using Cygwin or WSL), and via Termux on Android. Sorry, iOS: here's a dime, go buy a real computer.

An example referencing a specific post:

```sh
curl -s 'https://diaspora.glasswings.com/posts/ed03bc1063a0013a2ccc448a5b29e257.json' |
    jq -r '.interactions.comments[-1].text'
```
Note that comments are indexed either from the start (beginning with [0]) or via negative values from the end (the last is [-1]), and you can provide another offset to access a comment some number of positions from the start or end. The third comment would be [2]; the fourth most recent would be [-4].
Leave the iterator unspecified to select all comments:

```sh
jq -r '.interactions.comments[].text'
```
As before, the `-r` argument outputs raw text without JSON escaping of quotes and other characters. This avoids most post-retrieval processing, e.g., a `sed` script to remove quoted characters and the like.

And as I've mentioned previously, jq itself is an extraordinarily useful, if occasionally opaque, command-line tool for processing and parsing JSON data. Which happens to be how Diaspora* delivers much of its content.
This post is of course based on a comment I'd made to this earlier jq thread. Edited and adapted, but substantially similar.
https://diaspora.glasswings.com/posts/ed03bc1063a0013a2ccc448a5b29e257
#UsefulJqRecipes #jq #json #diaspora #tips
Useful Diaspora* `jq` recipes: Finding your most-engaged peers
One question I've had on Diaspora* is who it is I'm interacting with most often.
This is a jq extract feeding a Linux / Unix pipeline which will show the most frequent users within the `others_data.relayables` data:

```sh
jq -r '.others_data.relayables[].entity_data.author' |
    sort | uniq -c | sort -k1nr | cat -n | less
```
For those unfamiliar with scripting: after extracting the data, I'm passing it through a set of Linux utilities. `sort | uniq -c` is something of an idiom for tallying frequencies.
These indicate any 'like', 'comment', or 'poll' interactions, but not reshares, best I can tell. For the record, the summary:

```
$ jq -r '.others_data.relayables[].entity_type' archive.json |
    sort | uniq -c | sort -k1nr | cat -n
     1  16319  like
     2   7979  comment
     3     67  poll_participation
```
I will allow that the profile most frequently interacting ... was something of a surprise.
I feel I can share my top 20 hosts / pods:
```
     1  15937  pluspora.com
     2   2802  diasp.org
     3   2019  diaspora.glasswings.com
     4    996  joindiaspora.com
     5    666  social.isurf.ca
     6    361  diasp.eu
     7    206  diasporing.ch
     8    104  framasphere.org
     9     99  pod.geraspora.de
    10     92  hub.libranet.de
    11     64  diaspora.psyco.fr
    12     64  diaspora.ty-penguin.org.uk
    13     64  hey.iseeamess.com
    14     48  social.c-r-t.tk
    15     44  diaspora-fr.org
    16     42  nerdpol.ch
    17     40  pod.orkz.net
    18     39  diaspora.permutationsofchaos.com
    19     31  protagio.social
    20     30  societas.online
```
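(The pipeline above tallies full user_ids; the per-host command isn't shown here. A plausible sketch, assuming `author` values of the form user@host, strips the user portion before tallying:)

```sh
jq -r '.others_data.relayables[].entity_data.author' archive.json |
    awk -F@ '{print $2}' |
    sort | uniq -c | sort -k1nr | cat -n
```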
Some expected results and a few surprises there. Note that Pluspora provides an overwhelming share of the traffic, while Joindiaspora, despite its 300k members and 10-year history, ranks only 4th overall. The recently-departed social.isurf.ca hit a rather storied number and ranked 5th overall.
And as I've mentioned previously, jq itself is an extraordinarily useful, if occasionally opaque, command-line tool for processing and parsing JSON data. Which happens to be how Diaspora* delivers much of its content.
#jq #json #Diaspora #UsefulJqRecipes #tips
My current jq project: create a Diaspora post-abstracter
Given the lack of a search utility on Diaspora*, my evolved strategy has been to create an index or curation of posts, generally with a short summary consisting of the title, a brief summary (usually the first paragraph), the date, and the URL.
I'd like to group these by time segment, say, by month, quarter, or year (probably quarter/year).
And as I'm writing this, I'm thinking that it might be handy to indicate some measure of interactions --- comments, reshares, likes, etc.
My tools for developing this would be my Diaspora* profile data extract and `jq`, the JSON query tool.

It's possible to do some basic extraction and conversion pretty easily. Going from there to a more polished output is ... more complicated.
A typical original post might look like this (excluding the `subscribed_pods_uris` array):

```json
{
  "entity_type": "status_message",
  "entity_data": {
    "author": "dredmorbius@joindiaspora.com",
    "guid": "cc046b1e71fb043d",
    "created_at": "2012-05-17T19:33:50Z",
    "public": true,
    "text": "Hey everyone, I'm #NewHere. I'm interested in #debian and #linux, among other things. Thanks for the invite, Atanas Entchev!\r\n\r\nYet another G+ refuge.",
    "photos": []
  }
}
```
Key points here are:

- `entity_type`: Values "status_message" or "reshare".
- `author`: The user_id of the author, yours truly (in this case in my JoindiasporaCom incarnation).
- `guid`: Can be used to construct a URL in the form of `https://<hostname>/posts/<guid>`.
- `created_at`: The original posting date, in UTC ("Zulu" time).
- `public`: Status, values `true`, `false`. Also apparently missing in a significant number of posts.
- `text`: The post text itself.

A reshare looks like this:
```json
{
  "entity_type": "reshare",
  "entity_data": {
    "author": "dredmorbius@joindiaspora.com",
    "guid": "5bfac2041ff20567",
    "created_at": "2013-12-15T12:45:08Z",
    "root_author": "willhill@joindiaspora.com",
    "root_guid": "53e457fd80e73bca"
  }
}
```
Again, excluding the `.subscribed_pods_uris`. In most cases, reshares are of less interest than direct posts. Interestingly, I've a pretty even split between posts and reshares (52% `status_message`, that is, posts).

My theory in creating an abstract is:
- Automation is good.
- It's easier to peel stuff off an automatically-created abstract than to add bits back in manually.
- The compilation should contain only public posts and exclude reshares.
- It's relatively easy to create a basic extract: `jq '.user.posts[].entity_data | .author, .guid, .created_at, .text'`
Adding in selection and formatting logic gets ... more complicated. Among other factors, `jq` is a very quirky language.

Desired Output Format
I would like to produce output which renders something like this for any given post:

**Diaspora Tips: Pods, Hashtags & Following**

> For the many Google Plus refugees showing up on Diaspora and Pluspora, some pointers: ...

https://diaspora.glasswings.com/posts/a53ac360ae53013611b60218b786018b (2018-10-10 00:45)

**What if any options are there for running Federated social networking tools on or through #OpenWRT or related router systems on a single-user or household basis?**

> I'm trying to coordinate and gather information for #googleplus (and other) users looking to migrate to Fediverse platforms, and I'm aware that OpenWRT, #Turris (I have a #TurrisOmnia), and several other router platforms can run services, mostly #NextCloud that I'm aware. ...

https://diaspora.glasswings.com/posts/91f54380af58013612800218b786018b (2018-10-11 07:52)
The original posts can of course be viewed at the URLs shown.
What this is doing is:

- Extracting the first line of the post text itself.
- Stripping all formatting from it.
- Bolding the result by surrounding it in `**` Markdown.
- Including the second paragraph, terminating it in an ellipsis (`...`).
- Including a generated URL, based on the GUID, here parked on Glasswings. (I might also create links to Archive.Today and Archive.Org of the original content.)
- Including the post date, with time in YYYY-MM-DD hh:mm resolution.
Specific questions / challenges:

- How to conditionally export only public posts (see the sketch after this list).
- How to conditionally export only `status_message` (that is, original) posts, rather than reshares.
- How to create lagged "oldYear" and "oldMonth" variables.
- How to conditionally output content when the computed Month and Year values exceed oldMonth and oldYear respectively. The goal is to create `## .year` and `### .month` segments in the output.
- How to output up to two paragraphs, where posts may consist of fewer than two separate text lines, and lines may be separated by multiple or only single linefeeds (`\r\n`).
- How to collect and output hashtags used in the post.
- How to include counts of comments, reshares, likes, etc. I'm not even sure this is included in the JSON output.
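A sketch addressing the first, second, and fourth items (not my final script; it substitutes jq's `group_by` for the lagged-variable approach, and assumes the export structure shown above):

```sh
zcat DIASPORA_EXTRACT.json.gz |
    jq -r '[ .user.posts[]
             | select(.entity_type == "status_message")  # original posts only
             | .entity_data
             | select(.public == true) ]                  # public posts only
           | group_by(.created_at[0:4])[]                 # bucket by year
           | "## \(.[0].created_at[0:4])",
             ( .[] | "\(.created_at[0:10])  https://diaspora.glasswings.com/posts/\(.guid)" )'
```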
And of course, if I have to invoke other tools for part of the formatting, that's an option, though an all-in-jq solution would be handy.
#jq #json #diaspora #scripting #linux

Web developer with a frontend focus wanted for taz.de, full- or part-time, starting immediately

The #taz was Germany's first daily #newspaper readable online. It still offers, day in and day out, the chance to do things differently, and it remains independent of any #corporation.

Do you want to help shape the increasingly digital #future of #journalism with us? We offer a cooperative #environment that leaves room for #development and #creativity, but that also demands strategic #thinking and a willingness to solve everyday problems on your own initiative.

We are looking to hire soon a colleague with practical professional experience in web development; career changers are welcome. What matters to us is that you are not merely capable of teamwork, but prefer to work together with others, including non-technical colleagues.

Many changes are coming to the #frontend side of taz.de. We are currently redesigning and rebuilding our publishing section; next, the #taz plans to relaunch the editorial section, rethinking and changing a great deal in the process. Besides maintaining and developing taz.de, a colourful bouquet of topics awaits you: #privacy, #tracking, #ads, #SEO, structured #data, #feeds, #accessibility, and much more.

Requirements:

- Basic knowledge of the standards required for web applications (#HTML5, #DOM, #XML, #XSLT, #CSS3, #JS, #JSON, #HTTP, #REST)
- Confident use of #ES6, #jQuery, and #CSS preprocessors and frameworks is desirable
- You can build responsive pages that work cross-browser even without #frameworks; a certain appetite for working from first principles doesn't hurt
- We rely on server-side #rendering with #XML as the data foundation, fed from several #CMS systems. So far we use #xslt; we want to replace that with server-side JavaScript
- Experience with #nodejs would be great
- Interest in #UX and #UI
- Willingness to work into, and further develop, third-party code
- Confident use of version-control systems
- Experience in the news and publishing sector would be an advantage
- Very good self-organisation

At the taz we offer not only a collegial working environment but also family-friendly working hours (flexible full-time at 36.5 hours/week; remote work is currently encouraged until further notice because of #Corona, and #Home-Office remains possible in principle afterwards; 30 days of holiday). There is also a proper (and subsidised) lunch in the taz café.

We want to become more diverse, so we especially welcome applications from People of Color and from people with disabilities. Your perspectives matter to us and should be represented at the taz. The workstations and toilets are largely accessible, and the taz café is wheelchair-accessible.

Send us your application and show us which skills and experience you would like to develop at the taz.

This is a full, permanent position starting at taz pay grade V; part-time is also conceivable if full-time isn't possible for you. Start date: as soon as possible. Feel free to tell us when you could start, and address your application to webjob@taz.de.

We're also happy for you to pass this along; you can find the job posting at https://taz.de/jobs as well.
#job #jobs #arbeit #anstellung #jobangebot
Colleagues!

Do you want to become my colleague? Like, for real? The two of us as a team? Then apply. Not to me, of course; the address is at the very bottom.

To support the PHP competence team, the web techies at taz.de are looking for a PHP developer, starting immediately.

The #taz was Germany's first daily #newspaper readable online. It still offers, day in and day out, the chance to do things differently, and it remains independent of any corporation.

Do you want to help shape the increasingly digital future of #journalism with us? We offer a cooperative environment that leaves room for #development and #creativity, but that also demands strategic #thinking and a willingness to solve everyday #problems on your own initiative.

We are looking to hire soon a colleague with practical professional experience in #WebDevelopment; career changers are welcome. What matters to us is that you are not merely capable of teamwork, but prefer to work together with others, as a permanent part of our two-person #PHP crew. You would take over existing #software projects, and the role also covers the maintenance, further development, and #documentation of those projects. So there should be no hesitation about working your way into third-party code of various kinds, and we hope for #openness to the old and the new alike. You should also be used to working with version control.

What we ask for, in addition:

* Experience with #PHP #frameworks; #Symfony in particular is an advantage
* Basic knowledge of the standards required for web applications (#DOM, #XML, #XSLT, #CSS, #JS, #JSON, #HTTP, #REST)
* Knowing what matters in relational #databases (especially #MySQL / #mariaDB)
* An overview of the various web-relevant programming languages
* A #Linux server environment
* Ideally, concrete knowledge of application development for the "Invision Community Suite"
* English (reading) skills are expected. Non-native German speakers are welcome too, though everyday German is still necessary

If you fancy working in a still politically motivated environment, as part of the web developers' team, collaborating across departments with a wide range of interesting people in product development, IT, editorial, and publishing, get in touch. At the taz we offer not only a collegial working environment but also family-friendly working hours (flexible 36.5 hours/week; #remote work is currently encouraged until further notice because of #Corona, and #Home-Office remains possible afterwards; 30 days of holiday). There is also a proper (and subsidised) lunch in the #taz café.

We want to become more diverse, so we especially welcome applications from People of Color and from people with disabilities. Your perspectives matter to us and should be represented at the taz. The workstations and toilets are largely accessible, and the taz café is wheelchair-accessible.

You'd also be welcome to look after the #Diaspora presence together with me :)

Send us your application and show us which skills and experience you would like to develop at the taz.

This is a full, permanent position starting at taz pay grade V. Start date: as soon as possible. Feel free to tell us when you could start, and address your application to webjob@taz.de
#Stellenangebot #Stellenausschreibung #Job #Programmierung #Programmierer