Serving multiple websites from sub routes of a domain using Nginx

A few days ago I was working on migrating the PyConf Hyderabad website from GitHub Pages to a DigitalOcean server, so that we could serve the previous year's website as an archive alongside the current one. Both websites are static sites generated by Jekyll.

Objective

The objective was to serve the conference website for each year under a year-wise sub-route. The domain was pyconf.hydpy.org, so we wanted something like pyconf.hydpy.org/2017/ and pyconf.hydpy.org/2019/. The two websites lived in two separate repositories, and we wanted to serve both in this manner.

Solution

Nginx directives provide an excellent way of accomplishing this by leveraging the power of regular expressions. Nginx is widely used as a production-grade web server, and it was also the server running on my DigitalOcean droplet.

Since the websites were in two different repositories, all I had to do was point every request for a specific sub-route to the corresponding directory path. This is done with the Nginx location directive. I changed the Nginx config with code similar to the following:

location = / {
    rewrite "^.*$" /2019/ redirect;
}

location ~ ^/2017(.*)$ {
    alias /home/pyconf/hydpyconf2017/_site/$1;
}

location ~ ^/2019(.*)$ {
    alias /home/pyconf/hydpyconf2019/_site/$1;
}

Let us understand this step by step. There are two parts to this configuration.

Step 1

Redirect the home URL to /2019/. This sends the root domain to the sub-route of the current year's website. The = modifier in the location directive matches the exact route, and the rewrite directive redirects it to the specified replacement expression.

Step 2

The ~ modifier enables case-sensitive regular expression matching. Here we specify that if a route starts with /2017, Nginx should look up the rest of the path inside the directory given by the alias directive. $1 refers to the first captured group in the regex; we can use such groups to build the replacement path that we want to serve from.
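
To sanity-check a setup like this, a small Python snippet can confirm that the root redirects and that a year's site is served. This is just a rough sketch, assuming the droplet is reachable over plain HTTP on port 80:

# Check the root redirect (the rewrite ... redirect directive issues a 302)
import http.client

conn = http.client.HTTPConnection('pyconf.hydpy.org')
conn.request('GET', '/')
resp = conn.getresponse()
print(resp.status, resp.getheader('Location'))  # expect 302 pointing at /2019/
conn.close()

# Check that the 2017 site is served from its alias directory
conn = http.client.HTTPConnection('pyconf.hydpy.org')
conn.request('GET', '/2017/index.html')
resp = conn.getresponse()
print(resp.status)  # expect 200
conn.close()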

Thus, with the help of regular expressions and a couple of Nginx directives, we are able to serve two different static websites from a single domain.

PyCon India 2019 Experience

What an experience it was! This was my second time attending PyCon India and my third time attending a Python conference. I have been a part of the organising teams of PyConf Hyderabad 2017 and PyCon India 2018 (Hyderabad). Each time it has been a fulfilling experience, with numerous takeaways and new bonds of friendship. I haven't blogged about my experiences at past conferences, so this will be my first post about a conference experience.

The last two times I attended conferences close to where I was living. This time the 11th edition of PyCon India was held in Chennai. Having been an organiser at past conferences, I knew about the wonderful experiences involved, so I decided to be part of the volunteering team for this year's PyCon. I had been eagerly waiting for the event from the moment it was announced. PyCon has become a mandatory event for me now, where I get to meet old friends and make new ones.

Day 0

I reached Chennai on the night of 11th Oct, before the conference, to help out with volunteer activities. I went straight to the conference venue at Chennai Trade Centre, where many volunteers had gathered to help with pre-conference activities and goodie bag packing. This is a fun event where we introduce ourselves and assign tasks for the upcoming days. I met HydPy folks – Ram, Murthy, Gokul; dgplug friends – Rayan, Devesh, Chandan, Bhavin; and many more friends whom I had met at the last PyCon. Chandan was co-ordinating the volunteer team. I also met Vijay, Naren and folks from Chennaipy, who were the organisers this year. After dinner we started packing the goodie bags, sorting attendee ID cards and getting prepared for the big day. It was almost 2 am when we left the venue for our hotel. Unfortunately my Oyo booking was cancelled; thanks to Murthy, who shared his accommodation for the night.

Day 1

It was the first day of the conference. I reached the venue around 8 am, and there were already people coming in for registration, so I sat at the registration desk to help out. I met Kuntal (@hellozee) from dgplug and Rayan, who helped with the registrations. Within no time there was a big queue of people. After some time, when the crowd lessened, I went for breakfast, where I met Chirag, who had come for the day. We discussed the upcoming PyConf Hyderabad 2019 event. After that I met all the folks from HydPy who had come for the conference. Here's our group photo.

After that I went to attend Pradyun's talk on Python packaging. It was a really insightful talk explaining the pip ecosystem. Soon after the talk I started going around the sponsor booths to learn more about their businesses and to solve the puzzles they were giving out to win some goodies 🙂 This year PyCon India had a long list of sponsors – 41 of them! Kudos to the PyCon India team!

Post lunch there was a poster presentation scheduled where I was going to talk about the HydPy community. As a volunteer, I helped with setting up the poster sessions as well. Folks from HydPy were also there, and we talked about our upcoming conference – PyConf Hyderabad on Dec 7-8. The day wrapped up with ASL Devi's keynote on bridging the gender gap in tech. It was an eye-opening talk on gender stereotypes, the unconscious biases we carry, and how we can overcome them together. After that we had a volunteers' meet to retrospect and discuss feedback about the day. The day ended with the volunteers' and speakers' dinner.

Day 2

My day started again at the registration desk, where I helped with lightning talk registrations and gave out ID cards to attendees who had missed collecting them the previous day. Shortly after, I went to the first keynote of the day, by Ines Montani, titled Let Them Write Code. The talk covered some best practices in development and coding, her lessons from open source, and much more. We also had the dgplug staircase meeting that day, a customary meeting at every PyCon India where we meet and greet people of the dgplug community whom we have only ever seen on the #dgplug IRC channel. It's the time when we actually meet and talk to the people we see online. Kushal's absence was felt this time. Sayan hosted the meeting and discussed some important concerns regarding the community and the need for people to interact more on the IRC channel. After that he distributed the dgplug T-shirts which we had ordered before the conference. We had the mandatory dgplug group photo after that 🙂

I spent most of the rest of the day helping out at the helpdesk and interacting with the people I met. The PyLadies booth was right beside the helpdesk, where I talked to women from the PyLadies Chennai community. They had held a PyLadies lunch the previous day and a speed mentoring session to invite more women into tech. Finally we were up to the final keynote of the day, by David Beazley, titled A Talk Near the Future of Python. Once he started the live coding session in his keynote, the audience was simply going gaga over his performance. He live-coded a stack machine, which he later turned into a WebAssembly compiler that played a rocket game at the end. He showed that Python has a bright future by demonstrating the power of Python + WebAssembly. This was the best talk I have attended to date. He has already published the screencast; do have a look and amaze yourself. The thing that impressed me most was his fearless stunt in front of 1200+ attendees, and how calmly he handled the intermittent crashes and debugged them.

Finally, PyCon India ended with the closing address by Vijay, who was the chair for this year's PyCon. That marked the end of the conference, after which there were workshops + devsprints for the next two days. At the end we had a group photo with all the attendees.

PyCon India is successful because of the large pool of people who volunteer at the event and come from all parts of the country for the love of the community. It has been a pleasure to work and interact with these amazing people. I met Noah this year, who came all the way from Taiwan and volunteered from Day 0. It really amazes me to see the enthusiasm people have for the community.

Day 3

Oct 14-15 were workshop + devsprint days, held at the IIT Madras Research Park. I was leaving that night, so I packed my bags and went to the venue in the morning. I started by helping with registrations for the workshops and devsprints and guiding people to their rooms. Then I went to the devsprint room to work on the Python packaging sprint hosted by Pradyun, which I had been eagerly waiting for. I started with the setup and then worked on an issue. After lunch I found David Beazley at the venue, so I quickly joined in for a short conversation with him. I asked him the secret behind his live coding stunt the previous day. He said he did roughly 15 iterations of the run, and even spent 45 minutes debugging an issue in one iteration. But the fluency came from his 35 years of experience! He also frequently teaches groups of students, where he live-codes during his classes. It was a real pleasure to talk to him. After the conversation I continued to work on another issue from pip till the end of the day.

Finally it was time to say goodbye to friends after three awesome days, taking back lots of memories and experiences. PyCon India has become an event close to my heart and one I don't want to miss. See everyone again next year 🙂

Summary

PyCon India has let me meet and communicate with people from all parts of the world and from every domain. There is something to learn at each of these events. It's an event by the community, for the community. My takeaway this year was – work, interact and share your experiences. You can be good at coding and an excellent programmer, but unless you interact with more people and share your learning, you cannot be a good person. Also, volunteer and give back to the community. Volunteering requires time and effort, but it is a gesture of giving back to the community and a way of meeting people who are doing amazing work for it. The entire open source world largely revolves around people volunteering their time for the good of the community, so even a small effort can make a big change.

That's all about my PyCon India 2019 experience. Do leave your comments! And if this excites you, do come to PyCon India next year 🙂

Understanding python requests

In this post I am going to discuss the python-requests library. Python-requests is a powerful HTTP library that helps you make HTTP(S) requests very easily, with a minimal amount of code, and it supports things like Basic HTTP Authentication out of the box. But before diving in, I want to describe the motivation behind writing this post.

When it comes to writing software, libraries are a lifesaver. There is a library that addresses almost every problem you need to solve. That was the case for me as well: whenever I faced a specific problem, I would look to see if a library already existed. But I never tried to understand how they were implemented, the hard work that goes into building them, or the folks behind them. Most of the libraries we use these days are open source and their source code is available somewhere. So we could, if we wished to, with a little hard work, understand the implementation.

During a related discussion with mbuf in the #dgplug channel, he gave me an assignment: pick one of the libraries I had recently used and understand what data structures and algorithms are used in it. So I chose to look inside the source code of python-requests. Let's begin by understanding how two nodes in a network actually communicate.

Socket Programming: The basis of all Networking Applications

Socket programming is a way of connecting two nodes in a network and letting them communicate with each other. Usually, one node acts as a server and the other as a client. The server node listens on a port at an IP address, while the client reaches out to make a connection. The combination of a port and an IP address is called a socket. The listening socket on the server waits for requests from the client.

This is the basis of all web browsing that happens on the Internet. Let us see what a basic client-server socket program looks like.

# A simple Client Program for making requests using socket
import socket
s = socket.socket()
# The port on which the server is listening
port = 12345
# Connect to the server
s.connect(('127.0.0.1', port))
# Send Request to server
send_data = 'Hello'
s.send(send_data.encode('utf-8'))
# Receive response from server
recv_data = s.recv(1024)
print(recv_data.decode('utf-8'))
# close the connection
s.close()

# A simple server program for listening to requests via socket
import socket
s = socket.socket()
print('Socket Created')
# The socket listens to a specific port
port = 12345
# Bind the socket to a port
# Here the socket listens for requests
# coming from any IP address in the network
s.bind(('', port))
print('Socket bound to port %s' % port)
# Put the socket in listening mode
s.listen(5)
print('Socket listening')
while True:
    # Establish connection with the client
    c, addr = s.accept()
    print('Got Connection from', addr)
    # Receive request from client
    recv_data = c.recv(1024)
    print(recv_data.decode('utf-8'))
    # Send a response to client
    send_data = 'Thank you for connecting'
    c.send(send_data.encode('utf-8'))
    # Close the connection
    c.close()


As you can see, a server binds to a port where it listens for incoming requests. In our case it listens on all network interfaces, 0.0.0.0 (represented by an empty string), on port 12345. For an HTTP server the default port is 80. The server accepts an incoming request from a client, then sends a response and closes the connection.

When a client wants to connect to a server, it connects to the port the server is listening on and sends its request. In this case we send the request to 127.0.0.1, which is the IP of the local machine, known as localhost.

This is roughly what any client-server communication looks like, but there is obviously a lot more to it. A server receives more than one request, so we need a multi-threaded server to handle them. In this case I sent simple text, but the data could also be images, files, etc.

Most of the communication that happens over the web uses HTTP, a protocol for handling the exchange and transfer of hypertext, i.e. the web pages we visit. Then there is HTTPS, the secure version of HTTP, which encrypts the communication happening over the network using protocols like TLS.

Making HTTP Requests in Python

Handling HTTP/HTTPS requests in an application can be complex, so we have libraries in every programming language that make our lives easier. In Python there are quite a few libraries that can be used for working with HTTP. The most basic is http.client, which is part of the CPython standard library. http.client uses socket programming underneath to make the request. Here's how we make an HTTP request using http.client.
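
A minimal sketch of such a request (with httpbin.org and the user/passwd credentials as placeholder example values) could look like this:

# Making a GET request with a Basic Auth header using http.client (sketch)
import base64
import http.client

# Build the Basic Authorization header from a username:password pair
credentials = base64.b64encode(b'user:passwd').decode('ascii')
headers = {'Authorization': 'Basic %s' % credentials}

conn = http.client.HTTPSConnection('httpbin.org')
conn.request('GET', '/basic-auth/user/passwd', headers=headers)
response = conn.getresponse()
print(response.status, response.reason)
print(response.read().decode('utf-8'))
conn.close()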

For making requests that involve authentication, we have to send an Authorization header along with the other request headers. The base64 library is used here for generating the Base64-encoded authorization string.

Using python-requests for making HTTP requests

The http.client library is a very basic library for making HTTP requests, and it is not usually used directly for complex HTTP requests. Requests is a library that wraps around http.client and gives us a really friendly interface to handle all kinds of HTTP(S) requests, simple or complex, and it takes care of lots of other nitty-gritty, e.g. TLS security for HTTPS requests.

Requests depends heavily on the urllib3 library, which in turn uses http.client. This sample shows how requests is used for making an HTTP request.
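
A minimal sketch of the same request (again with httpbin.org and placeholder credentials as example values) could be:

# The same request, made with the requests library (sketch)
import requests

response = requests.get('https://httpbin.org/basic-auth/user/passwd',
                        auth=('user', 'passwd'))
print(response.status_code, response.reason)
print(response.text)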

You can see that making requests is much simpler using the requests module. It also gracefully figures out which protocol to use by parsing the URL of the request.

Let us now go over the implementation.

Inspecting requests

The requests API contains methods named after the HTTP request types, so there are get, post, put, patch, delete and head methods.
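
For example, a quick sketch (httpbin.org is used purely as an example endpoint):

# Each helper function maps to one HTTP method (sketch)
import requests

r = requests.get('https://httpbin.org/get')
r = requests.post('https://httpbin.org/post', data={'key': 'value'})
r = requests.put('https://httpbin.org/put', data={'key': 'value'})
r = requests.delete('https://httpbin.org/delete')
r = requests.head('https://httpbin.org/get')
print(r.status_code)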

Given below is a rough UML class diagram of the most important classes of the requests library.

When we make a request using the requests API, the following things happen:

1. Call to Session.request() method

Whenever we make a request using the requests API, it calls the requests.request() method, which creates a new Session object and in turn calls its Session.request() method. The request() method then creates a Request object and prepares to make the request.

2. Create a PreparedRequest object

The Session.request() method creates a PreparedRequest object from the Request object and prepares it for sending.

3. Prepare for the Request

The PreparedRequest object then makes a call to its prepare() method. The prepare method calls prepare_method(), prepare_url(), prepare_headers(), prepare_cookies(), prepare_body(), prepare_auth(), and prepare_hooks(). These methods do some pre-processing on the various request parameters.

4. Send the Request

The Session object then calls its send() method to send the request. The send() method gets the HTTPAdapter object, which makes the actual request.

5. Get the Response

The HTTPAdapter makes a call to its own send() method, which gets a connection object using get_connection() and then sends the request. It then builds the Response object from the request object and the httplib response (httplib is the Python 2 name of http.client).
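
This flow is also exposed as a public API, so a rough sketch of driving the same steps by hand (with an example URL) could look like this:

# Session -> Request -> PreparedRequest -> send, done explicitly (sketch)
import requests

session = requests.Session()
req = requests.Request('GET', 'https://httpbin.org/get')  # step 1: the Request object
prepared = session.prepare_request(req)                   # steps 2-3: PreparedRequest with prepare_*() applied
response = session.send(prepared)                         # steps 4-5: dispatched through the HTTPAdapter
print(response.status_code)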

So how does a request actually get sent, and how do we get an httplib response?

Enter the urllib3 module

The urllib3 module is used internally by requests to send the HTTP request. When control reaches the HTTPAdapter.send() method, the following things happen:

1. Get the Connection object

The HTTPAdapter gets the connection object using the get_connection() method, which returns a urllib3 ConnectionPool object. The ConnectionPool object is what actually makes the request.

2. Check if the request is chunked and make the request

The request is checked to see whether it is chunked or not. If it is not chunked, the urlopen() method of the ConnectionPool object is called. The urlopen() method makes the lowest-level call to perform the request using the httplib (http.client in Python 3) library, so it takes in a lot of arguments from the PreparedRequest object.

If the request is chunked, a new connection object is created, this time the HTTPConnection object of httplib. This connection object is used to send the request body in chunks via the HTTPConnection.send() method, which uses a socket to send the request.

3. Get the httplib response

If the request is not chunked, the httplib response is generated by the urlopen() method; if it is chunked, it is generated using httplib's getresponse() method. Httplib then uses a socket to read the response.
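
To get a feel for the layer that requests delegates to, here is a minimal sketch of calling urllib3 directly (again with an example URL):

# Using urllib3, the layer underneath requests (sketch)
import urllib3

# PoolManager hands out ConnectionPool objects and reuses connections
http = urllib3.PoolManager()
response = http.request('GET', 'https://httpbin.org/get')
print(response.status)
print(response.data.decode('utf-8'))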

And there you have it! Those are the most important parts of the requests workflow. There is a lot more you can learn by reading the code further.

Libraries make the life of a developer simpler by solving a specific problem and making the code shareable and widespread. There is also a lot of hard work involved in maintaining a library. So if you are a regular user of a library, do consider reading its source code if it is available, and contributing to it if possible.

Thanks to kennethreitz and the requests community for making our life easier with requests!

References

  1. https://www.geeksforgeeks.org/socket-programming-python/
  2. https://docs.python.org/2/howto/sockets.html
  3. https://en.wikipedia.org/wiki/HTTPS
  4. https://docs.python.org/3/library/http.client.html
  5. https://github.com/requests/requests
  6. https://github.com/urllib3/urllib3
  7. https://tutorialspoint.com/uml/uml_class_diagram.htm

Also, many thanks to my #dgplug friends for helping me improve this post.

Getting Aboard dgplug Summer Training 2018

This year I decided to join the dgplug Summer Training conducted by the Durgapur Linux Users' Group. I hadn't blogged for a long time, but in the last class we had a session on blogging and it really motivated me to start my blog again.

I have known about dgplug since my early college days, since I studied in Durgapur. I got to know about this training in 2017, when I first met Kushal at the FOSSASIA Summit and looked up dgplug's website. I had been wanting to join since then, but last year I couldn't really make time. So when I saw the tweet about this year's edition, I was eagerly waiting to get aboard.

The First Class

It started on 17th June, 2018 at 7 pm sharp. I had a flight to Hyderabad at 10.30 pm that day and I didn't want to miss the first class, so I left early to reach the airport before the class started. The classes take place on IRC, on the #dgplug channel on Freenode. I reached the airport just in time and still had to check in, so I stood in the queue connected to IRC from my mobile using the Riot app and eagerly waited for the session to start. People were counting down before it began, and I was amazed to see the number of IRC handles active during the session.

The session started and everyone greeted each other. Then Kushal laid out some basic rules for the class. There was an interesting way of asking questions: we had to type a "!" and a bot named "batul" would queue us up, and we had to wait until our turn came and batul prompted us to ask our question. The teacher could take the questions by typing "next". I had a slow internet connection and Riot took time to sync the messages, but I managed to follow along. Kushal asked us to read the FAQs for the training and to ask questions in case of any doubts. Then we were asked to introduce ourselves by raising hands with a "!". We were guided through the rules for every class and asked to read "How to ask smart questions". I realised this is something which should actually be taught in every community, so that we ask meaningful questions. Then the floor was open for questions. At the end of the session we were given some links to read as homework. The next session would happen the next day.

This was the format of each session, and mind you, it was conducted entirely on IRC, which enabled really fast communication on low bandwidth. Though the entire communication was text-based, it really felt as if I was in a real class. There is a roll call at the beginning and end of each class, I had to raise my hand and wait for my turn to ask a question, and the homework given was not at all overburdening. It has been just the right amount till now, and I have had time to read through the reading material provided after each class while also managing my office schedule.

The best part of the training so far has been that we are made to think and come out of our comfort zone in order to do a task. We have to think and search even before asking a question. I believe that is a good practice for anything we have a question about.

Two weeks of the training are already complete, and I have learnt a lot of things along the way.

First Week

There was a class on free software communication guidelines taken by Shakthi Kannan (aka mbuf), where we were taught mailing list etiquette and other communication guidelines. The slides on mailing list etiquette and communication will really help me in the long run.

Then there was a class on the Linux command line by Jason Braganza, which got us started with basic Linux commands. The LYM (Linux for You and Me) book is a good book for newcomers and I find it really easy to follow. It is being used in all our Linux command line sessions; we are asked to read a few chapters at a time and then we have doubt-clearing sessions where we can ask questions.

On Friday, 21st June we had our first guest session, by Harish Pillay. The session was very informative. He told us about the early days of the Internet, when it was known as ARPANET. He spoke about the importance of contributing to Free Software/Open Source and about Red Hat's open culture of contributing to open source projects. He then answered all sorts of questions on startups, open source contribution, licenses, distros, and more.

Over the weekend we were asked to watch two documentaries, The Internet's Own Boy and Citizenfour. I managed to finish the first one and got to know a lot more about the history of the Internet and the Free Software movement. We were also asked to read this article, from which I learnt how the history of the Free Software movement is deeply rooted in the freedom of speech and expression. I learnt more about the "free as in freedom" ideology, the origin of the GNU project and the importance of free software. I also did a bit more research on GNU/Linux operating systems and understood what a kernel means with respect to an operating system, and that it takes a lot more than a kernel to make an OS work.

Second Week

The next week we had a class on privacy and opsec from Kushal. He told us how to do threat modelling of our risks, and about good practices to ensure better security. I have already installed the Tor Browser and have been trying to use it as my main browser. Giving up Google is hard, but I am trying my best to use DuckDuckGo for searches now. I haven't really switched to a password manager yet; it will take me some time, since I have to get familiar with how to use one.

On Wednesday, 27th June we had a guest session by Nicholas H. Tollervey. He is a classically trained musician, philosophy graduate, teacher, writer and software developer. It really inspired me to see that if a person loves something, they can learn it no matter which field they come from. He is one of the pioneers of the micro:bit project. He answered questions on open source contribution and best practices for writing software.

On Thursday, 28th June we had a guest session from Pirate Praveen. He is a political activist who applies free software principles in his work and formed the Indian Pirates organisation. He is an upstream contributor to Debian and has packaged software like GitLab and Diaspora so that they can be set up easily just by running one command. His motivation for contributing to open source projects was that he wanted to solve a problem. He answered questions about his entry into politics, packaging, and his way of solving problems.

Then we had a class on blogging taken by Jason Braganza, who explained the nitty-gritty of blogging. Over the weekend I finished watching Citizenfour and Nothing to Hide, which we had been recommended to watch. They made me aware of a lot of things about privacy and surveillance that I didn't know. Privacy is a basic right and we should all know how to protect it.

The experience has been good so far and I am hoping to learn a lot more in the upcoming days. Thanks to the awesome team who have been working hard to make this possible 🙂

Open Source for Beginners with Google Code-in

Google Code-in 2016 has already started, and it's a pleasure to be part of it this time, working as a mentor with FOSSASIA on behalf of Public Lab, which is participating as a partner organisation with FOSSASIA. The contest is in its 7th consecutive year.

Google Code-in is a contest introducing pre-university students between the ages of 13 and 17 to open source development. GCI takes place entirely online and is held every year around this time in winter. The official dates this time are 28 November, 2016 to 16 January, 2017; follow the timeline for the detailed schedule. Winners get exciting goodies and a T-shirt from Google, and a chance to visit the Google US office on a one-week trip with their parents.

To many of you reading this post, the term Open Source might be completely new, so let me start with the very basics.

Every piece of software that you use, from your operating system (Windows, Linux, etc.) to any application, is written with some sort of code. The beautiful and mind-blowing applications that you use on your computers or mobiles are the hard work of some awesome developers who write this code. Many of you might be curious to look at the code behind them. But not all software is "Open Source", meaning not all of it gives you access to its code.

Open source software means software whose code is available to all. You can use it, modify it and distribute it (subject to the license provided along with it). And the most awesome part is that you can actually contribute to it and help it improve.

Now the most important question: how do you actually contribute to open source? The task may look daunting, given the millions of lines of code beneath, but believe me, it's actually fun once you get started. And what you will love most about open source once you land in it is that you will always find people to help you. So open source isn't just lines of code; it refers to the whole community that preaches it and is involved with it. Once you get started, you will actually fall in love with it.

So if you are completely new to open source (or already an open source contributor), want to know more about it, and meet the criteria (between 13 and 17 years of age), Google Code-in 2016 is just the best place for you to get started. You don't even have to know how to code to begin: there are many beginner and non-coding tasks to get you acquainted with open source. So why wait? Register yourself for Google Code-in 2016!

How to Get Started?

These are a few basic steps that you should follow:

  1. Create an account on GitHub
  2. Read the Guides and About sections on the Google Code-in website, and be aware of the timeline
  3. Register yourself on the Google Code-in site
  4. Search for an organisation like FOSSASIA (which I am mentoring for)
  5. View tasks and choose a task labelled Beginner to get started
  6. Claim the task and follow the instructions provided to complete it
  7. Get it reviewed by the mentor

What Do You Need to Know?

Well, there's not much you need to know. There are many non-coding tasks like writing blog posts, making a video, or improving documentation. So you really don't need to know anything except how to hold a conversation in English, which is a must since it is the medium of communication for this worldwide event. The contest is specially designed to encourage newcomers to open source and help them learn. But if you already know how to code in any scripting, programming or markup language, that is a plus point and you can take up the coding tasks. Even if you don't know how to code, it's completely OK, as you can start learning from here.

A short introduction to Git and Github

That's a lot of talk, and if you recall, I mentioned that the contest is completely online and you work from home. But did you ever wonder how so many people can actually work on the same piece of code while sitting in different places? Won't it get all messed up when people try to change things at the same time (at the same line of code, to be specific)? This is where the concept of a version control system comes in, and Git is a piece of software that helps with version control. You don't need to be scared: it is nothing but a way of working with the different versions of the code that exist with each user after changes are made. To learn more about it, wait for my next blog post. This is something you do need to learn in open source, as it is used everywhere, and you will get to learn it at some point during Google Code-in.

As for GitHub, it is the largest Git hosting website today. In simple terms, it is the place hosting the largest number of open source projects, often called the developers' hub. So if you want to work on a wide variety of open source projects, GitHub is where you will find them. The smart Octocat logo is something you will become very familiar with as you become an open source developer.

Finally A GSoCer!

Finally! I successfully passed the final evaluation! I just can't describe how happy I am! When I woke up on Tuesday morning, August 30th, I found this mail from GSoC:

[Screenshot: the GSoC final evaluation email]

I was feeling great, with a sense of accomplishment that the hard work I put in over the last 3 months had brought results! It has been a great experience contributing to open source all this time, and now I have a considerable amount of contribution behind me.

I was fairly sure about my evaluation, though, as my mentor Jeff had already appreciated my work in a comment on my final wrap-up note before the official result declaration, and I found he wrote the same thing in my final evaluation as well 🙂

Thanks to my mentors Jeff, Liz, Stevie, Bryan and David, and the entire Public Lab team! It was a great experience working with all of them. They were really helpful, working with all of the GSoC students and providing regular feedback. I hope I get to work with them more. We will be having a video call with all the GSoC students and mentors soon. They call it OpenHour; it is a kind of online seminar held at the beginning of each month, and the September OpenHour is dedicated to the GSoC students. Excited for that!

This is just the start of something good, and I hope I can contribute more to open source, learn new stuff and share my experiences here!

And as the famous poem by Robert Frost says:

And miles to go before I sleep,
And miles to go before I sleep!

The PublicLab Rich Editor

The twelfth week marker! The beginning of the end! GSoC 2016 will soon come to a close. I am almost done with my work on the Q & A system, though some work is still left as per my timeline goals. The only major piece remaining is integrating the PublicLab Rich Editor into the Q & A system.

The PublicLab Rich Editor is a separate project at Public Lab that my mentor Jeffrey Warren has been working on. It is an editor that supports both Markdown and WYSIWYG content, and it will be integrated into the publiclab.org website for posting content very soon. As part of the Q & A project I thought of working on it a bit and contributing as much as possible. It wasn't included in my timeline when I initially made my proposal, but I later modified the timeline to include it, as it really seemed interesting.

The PublicLab Rich Editor is a general-purpose, modular JavaScript/Bootstrap UI library for rich text posting, which provides an author-friendly, minimal, mobile/desktop (fluid) interface for creating blog-like content, designed for publiclab.org. It uses Grunt for packaging and compilation. It has a rich text editor based on the Woofmark library and an autocomplete feature supported by the horsey library. It uses Jasmine as its testing framework.

Since I am not familiar with Node.js or npm, it is a bit tough for me to understand its modular structure. I haven't made any contribution to the Rich Editor yet, but I will likely make some as the program wraps up; I have to learn Node.js for this. I also have to work on fixes that come up during the last week, as things are going to be deployed on the live site.

Also, my team mates who have been working on the same website but on different projects are doing some awesome work, and a big merge of the Search project is coming up. So things are getting busy in plots2 in the upcoming week. Stay tuned as GSoC 2016 comes to an end!

Modified Views for publiclab.org – Expanded Q & A Project

The eleventh week of GSoC 2016 is over, and when I look back I am amazed at the amount of contribution I have made to plots2. For the last couple of weeks I have been working on designing the interface for some pages, mainly the changes due to the Q & A system. Here is what I have been working on till now:

  1. Add a Recently Answered tab on the questions landing page that lists the recently answered questions
  2. Add Q & A to the user profile, listing the questions asked and answered by a user
  3. Create a distinct sidebar for questions
  4. Add tag-based sorting for questions, which enables filtering questions by tags
  5. Add a separate questions tab on the tags page, which earlier contained only research notes, wikis and maps
  6. Make it easier to search and ask questions from the questions page by improving the Search/Ask Question field that I made earlier
  7. Finally, add links to the questions page in the website header, and put links to the questions page and the ask-question page on various pages like the dashboard, the tags page, etc.

This is going to be a long PR and I am still working on it; it is nearly complete. Just a few small design changes and modifications are needed.

Apart from these, there were some important issues that I had to take care of while making these changes. I had to distinguish between research notes and questions, since questions in plots2 are actually notes marked with a question:topic power tag. So I had to list research notes and questions separately. I made two methods, .research_notes() and .questions(), in the DrupalNode model that extract research notes and questions separately.

Apart from these, there were many small design changes that I had to make along the way. Here are some screenshots of the pages; they are likely to change in the future.

Here is how the questions page looks now when you go to the /questions URL:

[Screenshot: the questions landing page]

Here is how the content of the questions section on the profile page looks. You can see this at the /profile/:username URL:

[Screenshot: the Q & A section on a user profile]

And here is how the questions are listed at the /tags/:tagname URL:

[Screenshot: questions listed on a tag page]

You can find my ongoing work in plots2 PR #628.

 

Creating Custom Rake Tasks

As I write this, the tenth week of GSoC 2016 is over and it's the end of the month. I am a little behind my timeline, mostly because I am working on the web interface for some pages and things need to be changed over and over to fit the design choices. But in the meantime I got to learn something new. We had to run two separate Rake tasks for running the Rails tests and the JavaScript tests, so there was an issue asking to run them all with a single Rake task for convenience. This is where I learnt about creating custom Rake tasks.

Rake is a Make-like program implemented in Ruby. It comes as a Ruby gem and has to be included in your Gemfile during development. Custom Rake task names start with a namespace followed by the task name, separated by a colon. The tasks live in the lib/tasks directory of a Rails app, in files with a .rake extension.

Here is the Rails Guides documentation on how to create custom Rake tasks, and here is a good blog post by Andrey Koleshko from which I learnt about writing Rake tasks. As described there, you can either write the task directly in a namespace.rake file (remember to name the file the same as the namespace) in the lib/tasks directory, or you can use the Rails generator to create the task file:

$ rails g task my_namespace my_task

This will create a lib/tasks/my_namespace.rake file where you can write your task.

I needed to run the two tasks, rake test and rake spec:javascript, from this task, so basically I had to run Rake tasks from within a Rake task. Here is a Stack Overflow answer that explains it perfectly; be sure to know the difference between execute, invoke and reenable.

Here is the custom task I wrote. It can be run using rake test:all:


# rake test:all
namespace :test do
  desc "Run rails and jasmine tests"
  task :all do
    puts "Running Rails tests"
    Rake::Task["test"].execute
    puts "Running jasmine tests headlessly"
    Rake::Task["spec:javascript"].execute
  end
end


You can find this in commit e90fe of plots2. I will probably be posting about the work on the design changes in my next post.