Abe Massry

Web Development and Building Businesses

Near Field Messenger - Connect With People Around You

Near Field Messenger (NFM) lets you connect with people around you. I have had this idea for a while and I wanted to get it out there to see what people thought. Leave me your feedback, and feel free to ask me questions if this is something you’re interested in learning more about.

The Problem

After I graduated college I found myself far away from friends. Facebook wasn’t much help because it was a directory of all my friends who live too far away to hang out with on a regular basis. Twitter is pretty good at finding new people you get along with, but half of my Twitter friends are on the other side of the country. What if there was a way to connect with people who live close by?

The Solution

Near Field Messenger. What if you could instantly talk to the closest 50 people around you in a shared chat room, right from your smartphone?

In New York City this would be your city block, but in Wyoming it would be the entire state.

How

How would something like this work? What if there is one person outside the 50 people and no one else around them? The idea I had was to start out with a small area and use fuzzy logic, so that if there are 50-60 people, that’s OK. Why limit it at all? Because you want to keep it local.

Really, How?

OK, so it all works like bubbles. Each bubble is an area, and when two people’s bubbles coalesce they become one larger bubble, so you won’t have people who are left out.

Once there are more than 50 people, new bubbles can be formed, and the visualization could use different colors.

The point is to get people who are physically close chatting with each other, just so they have someone to talk to and possibly hang out with.

Once they meet people they would like to hang out with, there can be private messages, and you should be able to import your contacts and make new contacts.

Why

After people graduate high school and move away, or graduate college, they lose touch with some friends. They still keep those friends, but they are far away and don’t get to see each other in person that much. And it gets really lonely. The same thing happened to me: I’ve been really lonely and searching for friends, but outside of a school environment it’s hard to approach people, and it’s hard to find people who are free, want to hang out, and don’t live so far away that you have to drive forever to get there. I don’t live in a city, and I feel like there are a lot of other people out there who are lonely too and just want someone to talk to.

Other apps out there / Competitors

meetup.com is out there, and you can find people in your local area who are interested in the same things you are. But the meeting times are formal, and there is a lot more to do in cities than in the suburbs or rural areas.

Popcorn Messaging is a similar app, but it only has a 1 mile radius and is kind of like a bulletin board where people can leave messages if no one else is actively using the app in your area. If Popcorn were to change their app a little it might be similar to NFM.

Ralph Chat is similar and also uses a radius. It does have global chats, so there are people to talk to, but they might not be in your area. If they changed their app or added a semi-local chat room it might be similar to NFM.

Conclusion

I hope you like this app idea and support me in making it happen.

Vote on it at: The Coshx 50k Competition

Building Wsend - a Command Line Tool to Easily Send Files

Building wsend - A Command Line Tool to Easily Send Files. I wanted to write a post about the technical aspects of building wsend, after Jon Gottfried suggested I write one. Thanks, Jon.

Motivation

As with any technical project I undertake, I like to explain the motivation behind the project. What is the end state of the project? What is the final product supposed to look like? Most of the time I have an answer to these questions before I start. With wsend the answer was clear: a command line program that gives you a URL for a file up to 10GB in size. It should be simple to install, quick to use, and not require any kind of complicated setup. There are other command line tools for sending files, but they either require me to put in additional information about where I am sending the file, like scp, or I have to set that sort of thing up before I start. So I had this idea in mind to scratch my own itch, and hopefully you find it useful too. (More motivation also came from: http://xkcd.com/949/.)

Server Side

Any time I’m working on a service that uploads a file, I like to start out on the back end. The reason is that we have straightforward tools for sending files, like HTML upload forms and curl, but many different options for accepting an upload on the server.

My first attempt, for a different website, was to use PHP.

  • I found that PHP ran into a 2GB file upload limit, even when running on a 64-bit machine.
  • So my next option was Perl and nginx.
  • Perl could handle a large file and so could nginx, but in order to link the two together I needed FastCGI.
  • I ran into a similar 2GB file upload limit with FastCGI.
  • So the next option was node.js.

I was able to upload a file up to 10GB and it worked using node. This worked out really well because the application is the server; there’s no layer in between.

The rest of the backend is a standard express app that exposes a simple API using POST for all communication. All of it can be exercised with curl on the client side for testing.

And here is an example of the express route used to handle the upload.

Express Upload Example
// upload file from command line
app.post('/upload_cli', function(req, res) {
  var userID = req.param('uid');
  var now = new Date();
  if (userID) {
    models.users.findOne({user_id: userID}, function (err, doc) {
      if (err) throw err;
      if (doc.user_id) {
        var filesize = req.files.filehandle.size;
        var filesizeInt = parseInt(filesize);
        var filename=req.files.filehandle.name;

        // newpath needs a unique folder,
        var uniqueDir = getUniqueDir();
        var newDir = __dirname + "/uploads"+'/'+ uniqueDir + '/';

        fs.mkdir(newDir, 0744, function (err) {
          if(err) { throw err; }
          var newPath = __dirname + "/uploads"+'/'+ uniqueDir + '/' +filename;

          //add a check to make sure newpath is writable
          fs.rename(req.files.filehandle.path, newPath, function(err) {
            if (err) throw err;
            var permalink = "https://wsend.net/"+ uniqueDir + '/' + filename;
            doc.files.push({timestamp: now,
                            filename: filename,
                            type: '',
                            size: filesizeInt,
                            dir: newPath,
                            permalink: permalink,
                            permissions: ''
                          });
            // save file location to db
            doc.save(function(err) {
              if (err) throw err;
            });
            // send permalink as response
            res.send(permalink);
          });
        });
      }
    });
  }
});

Express makes it really easy to handle the file upload. The properties of the uploaded file are stored in req.files.filehandle, and you move the file into place on the file system with fs.rename, which comes from node’s fs module and which you get with a var fs = require('fs');.
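For context, here is a minimal sketch of the setup that route assumes. This is not the actual wsend server code: in the Express 3.x era, the bodyParser() middleware handled multipart forms and populated req.files, and getUniqueDir just has to return a unique folder name (the implementation below is only an illustration).

Sketch of the surrounding setup (illustrative)
var express = require('express');
var fs = require('fs');      // provides fs.mkdir and fs.rename used in the route above
var app = express();

// In Express 3.x, bodyParser() included multipart handling and populated req.files.
app.use(express.bodyParser());

// getUniqueDir only needs to return a unique folder name; this version is illustrative.
function getUniqueDir() {
  return Date.now().toString(36) + Math.random().toString(36).slice(2, 8);
}

// models is the mongoose models module used in the route above (defined elsewhere).

app.listen(3000);  // port is illustrative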

The files are stored on the same server as everything else. While this is not optimal, and a service like S3 or a dedicated file server should eventually be used, the service is small scale right now; when usage grows, an alternative file storage system can be explored. All transfers happen over https, and you can encrypt your file before it leaves your computer with a handy script called wsend-gpg, which I describe later.

Client Side

Now that we have a server set up with an API that accepts uploads and gives you a URL for a file, we can do a lot of fun things on the client. The first thing to do was write a command line script that handles all of these API calls using curl.

Which language to choose for a command line script?

  • Perl is widely deployed on most Unix / Linux / Mac systems.
  • In order to add a progress bar, CPAN was needed.
  • CPAN has to be set up on the client’s machine.
  • Bash is the default shell on many Unix installations.
  • Bash is a scripting language.
  • cURL has its own built-in progress bar.
  • So: use bash and curl.

So I started working on the CLI. After all the setup that the script does automatically, it comes down to one command.

wsend command
wsend file.txt

And it returns you a URL.

Here is the function inside the script that sets everything up and actually sends the file.

wsend upload function
sendFile() {
  if [[ -e "$fileOrDirToSend" ]]; then
    if [ -d "$fileOrDirToSend" ]; then
      #we want to send a directory, so make a compressed archive
      fileOrDirToSend=${fileOrDirToSend%/}
      tar cfj "$fileOrDirToSend.tar.bz2" "$fileOrDirToSend"
      fileToSend="$fileOrDirToSend.tar.bz2"
    elif [ -e "$fileOrDirToSend" ]; then
      fileToSend=$fileOrDirToSend
    fi

    if [ "$clientOS" == "Darwin" ]; then
      fileToSendSize=$(stat -f %z "$fileToSend")
    else
      fileToSendSize=$(stat -c%s "$fileToSend")
    fi

    getAccountSpace
    if [ "$accountSizeAvailable" == "not enough space in your account for this transfer" ]; then
      notEnoughSpaceErr
    elif [ "$accountSizeAvailable" == "file is too big for your account size" ]; then
      filesizeTooLarge
    else
      if [[ $link ]]; then
        #link was provided, so update target link with file
        curlReturn=$(curl -F "uid=$id" -F "link=$link" -F "filehandle=@$fileToSend" $host/update_cli)
      else
        #simply create a new one
        curlReturn=$(curl -F "uid=$id" -F "filehandle=@$fileToSend" $host/upload_cli)
        echo "$curlReturn|$(make_absolute "$fileOrDirToSend")" >> "$wsend_base/.list"
      fi
      echo $curlReturn
    fi

    if [ -d "$fileOrDirToSend" ]; then
      #remove our temporary file
      rm "$fileToSend"
    fi
  elif [ "$fileSendBool" == "true" ]; then
    #want to send file, but source doesn't exist
    echoUsage="true"
  fi
}

Inside the script this is the main command that actually sends your file.

curl the file
curl -F "uid=$id" -F "filehandle=@$fileToSend" $host/upload_cli

Where $host is https://wsend.net. The return of that command is your URL. It’s really simple, but sometimes simple works out best.

Other Client Possibilities

Now that we have an API and a command line utility, we can do some interesting stuff. Future projects like these could include:

  • Writing clients in Ruby, Python, Perl, C …
  • Distributing with rubygems, pip, CPAN, apt, yum, pacman …
  • Or anything else you can think of and do. (A rough client sketch follows below.)
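As one example, here is a rough sketch of what a client in another language could look like. I’ve used Node with the form-data npm package purely for illustration; the endpoint and field names come from the upload route shown earlier, and the uid value is a placeholder.

Sketch of an API client (illustrative)
// Rough sketch of an API client, using the form-data npm package.
// The endpoint and field names match the upload_cli route shown earlier;
// the uid value here is just a placeholder.
var fs = require('fs');
var FormData = require('form-data');

var form = new FormData();
form.append('uid', 'YOUR-WSEND-ID');                 // placeholder id
form.append('filehandle', fs.createReadStream('file.txt'));

form.submit('https://wsend.net/upload_cli', function (err, res) {
  if (err) throw err;
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(body);   // the server responds with the permalink URL
  });
});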

Outcome

It’s been a really good learning experience, and it has been my first opportunity to work with others in open source on my own projects. Usually I’m a contributor to someone else’s project, but when people submit code for me to review and incorporate into my open source project, I really get the feeling that the community is working together as a whole. It’s really amazing that people believe in my ideas and like them enough to contribute, and I’m really grateful for the help. It was really exciting to get my first pull request, and I hope to work with others on this in the future.

Thanks Jon, for encouraging me to write this post.

Check out the site at https://wsend.net

Follow me on twitter @abemassry

5 Things I Use for Productivity

Before I consciously thought about productivity, I had an organically grown method and a wide range of tools that weren’t necessarily the most effective way of staying productive. Then I stepped back and looked at what worked best and what didn’t.

First the history:

In high school and college we had planners issued to us once a year. Everything I did went into them, and I stayed productive and on top of things because I checked the planner relentlessly. I always had it with me because I carried a backpack everywhere I went.

When I started working I bought a pocket-sized planner, but it was too small; it could fit easily in a jacket pocket, but I couldn’t always carry it with me.

Then came the smartphones. First it was email and calendaring, which was good but not the best. It was difficult to check, and the phone didn’t have everything plainly listed; you had to search for what you had to do that day. One positive was that there were now automated reminders and alarms that went off when a meeting was coming up.

These tools were difficult to use, but if you used them properly you could get by.

What I use now:

I’ll list these newest first, as I have just started using the newer tools and they are more at the forefront of my mind; they haven’t yet made it into my unthinking routine.

  1. Bullet Journal

    I saw this just recently, and I realized I was most productive when I had one true source, in a handwritten book, that I referred to. I jumped in headfirst and said “This is what I’ll be using from now on”. It makes sense to me, and the parts that a printed planner would have on the page, I draw in myself. The only problem in my usage of this is that I’m not always able to carry something with me, as it needs to fit in my pocket, which is why I also use:

  2. Evernote

    I use it as a backup to the Bullet Journal when I want to take a note to write down later and only have my phone with me, or to take a picture of something that I want to remember. It kind of serves as my own personal external memory. I used to use a point-and-shoot digital camera for this.

  3. Lift

    I usually check this around two times a day: as soon as I get up, and before I go to bed. It reminds me of the things I have to do on a daily basis; since I should be doing them every day, I wouldn’t otherwise write them down. I also use it to visually show progress on a long-running goal.

  4. Tempo

    I switched to Tempo for my calendaring app and I really like the experience. I can enter something on Google Calendar (which I use for desktop calendaring) and it will automatically show up and pull in relevant data for each meeting or appointment.

  5. My Brain

    All the others are some form of tech (analog tech in the case of the Bullet Journal). But in an ideal world I’d have a photographic memory, so I would remember all the important things as well as the semi-important things, and an internal chronometer that would alert me when the things in my calendar were coming up.

    The reason I use items 1-4, and why I think they are useful, is that they free my mind to focus on one task at a time, and I can complete the task efficiently when I’m able to focus. I actually forget an item temporarily when I write it down or put it in a calendar; then either a machine reminds me I have something to do, or I get a break in work, wonder what it is I have to do, and find it written down.

I’m Abe, and I make a tool for sending files that helps you be more productive on the command line and on the web. Check out Wsend

NTFS on OSX 10.7

This is mostly a reminder to myself on how to get NTFS reading and writing on a Mac working properly. You might have some use for it if you run into this problem as I did today. I’ll have to make another post once I update to 10.8 or 10.9.

1. Download NTFS 3G

First download and install this file: ntfs_3g. Use the first option, the one described as the safest.

2. Download OSXFuse

Next download and install this file: OSXFuse, and choose the MacFuse compatibility layer.

3. Download the fuse wait patch

Next download and install this file: fuse_wait. I don’t think there are any special options for this one, but I might have forgotten something, so let me know.

I haven’t included screenshots, so there is some updating that can be done. If you run into any problems, discuss them in the comments, and I’ll try to take screenshots on the next machine I run across.

Wsend Twitter Card

If you upload an image using wsend and want to share it on Twitter, it doesn’t show up as a Twitter card by default. I wanted to have it render in the stream.

So I started the repo wsend-twitter-card. The script takes the image provided and creates a link that allows Twitter to render a Twitter card.
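I’m not reproducing the script itself here, but the general idea is to wrap the image URL in a small HTML page that carries Twitter card meta tags, so the link you share points at that page rather than at the raw image. Here is a rough sketch of that approach; it is illustrative only, not the actual wsend-twitter-card code, and the example URL and tag values are made up.

Twitter card page sketch (illustrative)
// Illustrative only: build a tiny HTML page with Twitter card meta tags
// pointing at an image URL returned by wsend.
function twitterCardPage(imageUrl, title) {
  return [
    '<!DOCTYPE html>',
    '<html><head>',
    '<meta name="twitter:card" content="photo">',   // "photo" was the card type at the time
    '<meta name="twitter:title" content="' + title + '">',
    '<meta name="twitter:image" content="' + imageUrl + '">',
    '</head><body>',
    '<img src="' + imageUrl + '">',
    '</body></html>'
  ].join('\n');
}

// Example usage with a made-up wsend URL; you share the page's URL, not the raw image URL.
console.log(twitterCardPage('https://wsend.net/abc123/photo.jpg', 'photo.jpg'));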

Here is the final product:

This is just one example of an app that can be built on top of wsend. In the future I hope to create more apps built on top of it, following the Unix philosophy of each program doing one thing and doing it well.

New App: Wsend - the Opposite of Wget

A quick post to let everyone know I just launched a new website with a CLI. I was looking for a quicker way to send files from the command line, so I created wsend. I based it on wget, because wget is very quick and easy to use when you want to download a file from the web to your current directory. You can check it out at:

wsend.net

Working With Multiple Cores in Python

Challenges

I just created this new project called Crazip because I wanted to try writing a program that uses multiple cores in Python. Normally I would use threads for something like this, because it would cut down on the need for interprocess communication. It turns out that using threads in Python doesn’t allow the program to run on multiple cores with the default installation, because of CPython’s global interpreter lock. So there is the multiprocessing library to get the program to run on multiple cores simultaneously. After switching to it, the program showed up on multiple cores when run, and then shared memory was used to get the separate processes communicating.

The Idea

The idea is to take the hash of a file and then try to recreate the file from a known length of random ones and zeros. This works very well for very small file sizes, but the brute force method works terribly for large files: the limit is the number of random tries needed to find an input that reproduces the hash, which grows roughly as 2^n for an n-bit file. If one had access to a supercomputer with thousands of cores this might make sense. Then again, the question remains as to why you would need to compress something so heavily if you had access to that kind of hardware.

Creating and Displaying Subdocs and Nested Data With Mongodb and Nodejs

If I were using MySQL and creating tables for a relational database, I would have a couple of ways to show, for example, comments and likes for an article on a blog. The first way is to select my post from a table and then, based on the ID of that post, select my comments and likes for it. If I have properly normalized my tables this works well; the problem is that it requires extra processing in my application code. The next way is to use a join: write SQL to join the tables where there is a match and output the results. This is OK, but it requires more MySQL processing.

The question becomes how to do this in mongodb, a document-oriented database without tables. This turns out to be simpler to do, and simpler to understand, than the MySQL equivalent.

1. Set up your document Schema

For this code I’m using Node.js and mongoose to talk to mongodb, and jade to display the results.

Set Up Schema - schema.js
  // mongoose setup, shown here so the snippet is self-contained
  var mongoose = require('mongoose');
  var Schema = mongoose.Schema;
  var ObjectId = Schema.ObjectId;

  var Likes = new Schema({
    like_id: ObjectId,
    name: String,
    date_liked: Date,
  });
  exports.Likes = mongoose.model('Likes', Likes);

  var Comments = new Schema({
    comment_id: ObjectId,
    user: String,
    user_photo: String,
    body: String,
    date_commented: Date,
  });
  exports.Comments = mongoose.model('Comments', Comments);

  var schema = new Schema({
    blog_id: ObjectId,
    body: String,
    lang: String,
    title: String,
    date: Date,
    submitted_by: String,
    submitted_by_photo: String,
    likes: [Likes],
    comments: [Comments]
  });
  // export the post model; the routes below refer to it as models.posts
  exports.posts = mongoose.model('posts', schema);

The main schema is “var schema”; this is the blog post. The blog post contains likes and comments, and there can be many of each. They are stored as subdocuments, and each post can hold multiple of them in its likes and comments arrays. The advantage is that you can make one database call and get all the information about a post at once, so there is less application-to-database communication. It is also easier to reason about: for example, you could add a tag schema stored in another array, and a post could have an arbitrary number of tags.
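To make that last point concrete, a tag subdocument would follow exactly the same pattern as Likes and Comments. This is only an illustration; the field and model names below are made up.

Adding a tags subdocument (illustrative)
  // hypothetical Tags subdocument, same pattern as Likes and Comments
  var Tags = new Schema({
    tag_id: ObjectId,
    name: String
  });

  // added to the blog post schema as another array of subdocuments:
  //   tags: [Tags]
  // and pushing a new tag works the same way as pushing a comment:
  //   post.tags.push({name: 'nodejs'});
  //   post.save(function(err) { if (err) throw err; });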

2. Reading nested data out of mongodb

Here I’m using an express app to read the data out and send it to a jade file for displaying.

Reading data from mongodb in nodejs using mongoose - read.js
app.get('/blog/:id', function(req, res){
    if (req.user) {
      var username = req.user.username;
    }
    //get the example
    models.posts.findById(req.params.id, function(err, data){
      if (data) {
        models.users.findOne({name: username}, function(err, userdata){
         //render the view page
         res.render('blog.jade', {
             locals: {
               title: data.title,
               page: '',
               article: data,
               user: username,
               userdata: userdata
             }
         });
        });
      } else {
        res.redirect('/404');
      }
    });
  });

There are no differences here between reading a mongodb doc that has nested data and reading one that doesn’t.
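To make that concrete, the subdocuments simply come back as arrays on the returned document. Inside the findById callback above you could access them like this (purely illustrative; none of this is in the actual route):

Accessing the nested data (illustrative)
// inside the findById callback above, the nested arrays are just properties of data
console.log(data.comments.length);      // number of comments on this post
if (data.comments.length > 0) {
  console.log(data.comments[0].user);   // fields defined in the Comments subdocument
  console.log(data.comments[0].body);
}
console.log(data.likes.length);         // likes work the same way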

3. Output the data in jade.

There are two each loops that run through the nested data. If there are multiple nested documents on a page, like the main page of the blog, you can loop through the blog articles and then have a sub-loop for each comment section.

Displaying nested data in jade - blog.jade
//- i and j are like/comment counters assumed to be initialized earlier in the template
- each like in article.likes
  - i++;
button.btn.like(type="button", id="like_#{article.id}")
  i.icon-thumbs-up
  | Like
- each comment in article.comments
  - j++;

4. Posting data to mongodb

In order for the app to be useful there has to be some way for users to like and comment on articles. Here is an example of an express route for commenting on an article.

comment on an article express route - comment.js
app.post('/comment', function(req, res){
    if (req.user) {
      models.users.findOne({name: req.user.username}, function(err, userdata){
        var picture = userdata.photo;
        var now = new Date();
        var blog_id = req.param('blog_id');
        models.posts.findById(blog_id, function(err, blog){
          if (err) return handleError(err);
          blog.comments.push({user: req.user.username, user_photo: picture, body: req.param('comment'), date_commented: now});
          var doc = blog.comments[0];
          console.log(doc);
          blog.save(function(err) {
            console.log('error check');
            if(err) { throw err; }
            console.log('saved');
            res.redirect('/blog/'+blog_id);
          });
        });
      });
    }
  });

If you see an area where the code can be improved let me know. I’d like to incorporate all of this into a framework at some point.

What Entrepreneurs Can Learn From LOST

The TV show Lost managed to nail three things in the product category: the right product, the right place, and the right time.

Originally I didn’t catch Lost when it aired on TV; I marathoned all the seasons when they were released on Netflix. But after watching the show I could see why my friends and coworkers were hooked. I didn’t make the connection until now of how the show could be applied to everyday life.

It was one of the first shows to wholeheartedly make the jump to HD.

Between when it first aired and when it ended, it made the jump to HD and was better for it. The cinematographers really took advantage not only of the higher resolution but also of the lush landscapes present in the show, because they were shooting on location. This had to do with the timing and taking advantage of the technology available at the time.

It was available on DVD after the season had aired.

While DVD is not an HD viewing experience per se, the quality was a lot better than VHS tapes. And while there were other shows that did this too, Lost nailed this one as well.

It was available for online streaming.

Hulu was starting to come out with content at the time, but ABC had its own streaming solution on its site. Whether or not you agree with either of these, people could watch the show on the internet at their leisure, not bound by when it aired. They could also see higher quality video than if they had set their VCRs to record it. In addition, DVRs were starting to come into prominence, and they had the same benefit as online streaming when it came to picture quality.

I keep belaboring quality in all of these things, so what made this particular show so special? There were other HD shows at the time, some of them even filmed on location. It was the fact that it was filmed on location with almost cinema quality, the writing captured the audience in a way that kept them coming back week after week, and the actors executed the story so well that it was believable they were trapped on this faraway island.

In total, they had the right product and the right team to execute it, and they benefited from producing the show at the right time, when all of these auxiliary technologies came together and made it possible to get a high-quality product into the hands of viewers. They also had the insight to know which tech to pursue and how to use it effectively.

Observations on Mashrd

Just a quick observation: it seems like the stories that make it to the popular section are mostly about either politics or Apple products. I might make a chart with graphs to back up my observations next time. But it makes sense given the time of year: the election is happening in a week, and Apple just unveiled more products. Still, there is a lot of news that doesn’t see the light of day. Not sure if that’s a good thing or a bad thing.