Tables in HTML

In past tutorials we’ve looked at how to present text and images and how to create links in HTML. In this tutorial we’ll look at how to create a simple table to use in your web projects.

What tables look like

This is a table.              | It has two rows.
It has two cells in each row. | We’ll look at how to create this.

Table tag

The first thing we need to do to create a table is add the table tag itself. Every tag we open while building a table needs a matching closing tag, so be careful. With the table tag, we’re just telling the browser that we’re starting a table when we open it and ending the table when we close it.

<table>
   <!--We need to add more tags here to make our table work-->
</table>

This code won’t display anything yet; we’ll need to specify the number of rows and cells (data) and add content to our cells to have a proper table. The text in the middle of our table tags is a comment. We’ll be looking at HTML comments in greater detail in our next HTML Basics tutorial.

Rows

Once we have our table tags, we’ll need to nest tags inside them to specify the number of rows we want in our table. Luckily the tag is tr, for table row, which is easy to remember.

<table>
   <tr>
      <!--We'll add cells here-->
   </tr>
   <tr>
      <!--We'll add cells here-->
   </tr>
</table>

Cells/Data

Now we just need to add the cells into the rows of our table and we’ll be ready to go. We’ll be nesting the table data tags, td, inside the row tags to create a working table.

<table>
   <tr>
      <td>Here's the first cell in the first row</td>
      <td>The second cell in the first row</td>
   </tr>
   <tr>
      <td>Here's the first cell in the second row</td>
      <td>The second cell in the second row</td>
   </tr> 
</table>

Once we add the cells in, our table looks like this:

Here’s the first cell in the first row  | The second cell in the first row
Here’s the first cell in the second row | The second cell in the second row

Common problems

Don’t forget that you’ll need to close each of the tags you use to create your table. The tags also have to be nested within each other in the right order: data tags sit inside row tags, which sit inside the table tags.
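
As a quick reference, here’s the nesting order for a minimal one-row table, with every closing tag in place:

<table>                      <!--The table opens first-->
   <tr>                      <!--Each row opens inside the table-->
      <td>Cell content</td>  <!--Each cell opens and closes inside its row-->
   </tr>                     <!--Close the row before starting another-->
</table>                     <!--Close the table last-->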

Have questions about tables in HTML? Ask in the comments below or find me on Twitter.


Easy, Lazy SEO

Last Saturday I joined some incredibly talented speakers, dedicated organizers and lovely WordPressers for WordCamp Manchester. The event was hosted at Manchester Metropolitan University Business School, which is one of my favorite venues for medium-sized conferences. I gave a talk on Easy, Lazy SEO.

For attendees who wanted information on blocking the referral spammer Semalt, I recommend Logorrhoea’s how-to blog post.

I’ve got to admit that this talk was a bit too basic for the WordCamp audience. I had planned for an audience of SEO newbies and was delighted to find a really intelligent audience that was well informed on SEO. I think I’m going to start bringing both my introductory and intermediate level slides with me if I give this talk in the future. Luckily, a well-informed audience allowed for a really robust discussion session following the quick race through the slides.

Many thanks to organizer Jenny Wong, all the volunteers and the participants who made this event such a smashing success.


Ignite Liverpool

Last night I gave a short talk on imposter syndrome at Ignite Liverpool. They’ve got a really great group of volunteers, speakers and attendees and I can’t recommend enough that folks in the area make time for their quarterly events. I’ve included slides from my talk and a list of resources and recommended reading.

Images are from Morguefile’s free stock photos, Redditor Gabryelx created the Nyan pug, and I’ve included the unattributed meme image “I have no idea what I’m doing”. If anyone knows who created this image, please do let me know.

Valerie Young’s advice on combating imposter syndrome comes from her 2010 Forbes interview.

The study first capturing the Dunning-Kruger effect, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments”, is available through PsycNET.

I’ll be adding the video of this talk as soon as it’s made available through the Ignite Liverpool YouTube account.


Ladies Who Code Birmingham

Monthly events aimed at programmers who identify as female are now being held in Birmingham. It doesn’t matter if you’re a professional developer or looking to learn to write your first line of code, Ladies Who Code Birmingham is open to all skill levels.

We’ll be supplying cake and good company; we’re looking for some brilliant women to provide the great ideas to make Ladies Who Code a valuable resource here in the West Midlands.


The next Ladies Who Code Birmingham event will be Monday the 19th of May from 6:30pm at the Innovation Birmingham Campus. We’ll be asking for short lightning talks from participants so if you have a great project, pitch or tool you want to share with us, we’re happy to give you the stage. Don’t worry if you’re shy, we’re not planning on bullying anyone into speaking. This is only our second meeting, so we’ll also be looking for direction on what kind of events you would like to see in the future. Further details available at Meetup.com.


Have questions? Need directions? Want to request specific pastries? Drop me an email at jessica(at)closetoclever.com or shout at me on Twitter.

Can’t attend? Let us know how we can make the event more accessible in the comments below.


Transitioning into a Technical Role

On April 29 Innovation Birmingham hosted their first Women in Tech event. I was honored to have been asked to present and look forward to a successful series of similar events to follow. I have included my slides from my talk on moving into a technical role from a non-technical background below.

For anyone interested in attending some of the events I mentioned in my talk, I’ve listed them below:

Hydrahack
Open Code
Ladies Who Code Birmingham
Silicon Canal
West Midlands Ruby User Group
Tech Wednesday
Hackference Brum


Interview with Semalt.com

Yesterday I posted about Semalt.com’s crawler and their unusual choice not to have their crawlers identify themselves as web crawlers or obey robots.txt, causing heartaches for analytics-loving webmasters across the web. Semalt’s manager Alex Andrianov reached out through Twitter and offered to answer some of my questions via email. The exchange is included in full below.

Hi, thanks for taking the time to chat with me in a bit more detail about Semalt. Happy to update my blog post with any factual corrections you’re able to provide.

You’ve mentioned on Twitter that Semalt does not obey robots.txt, further saying that you “can’t change it”. Could you explain in a bit more detail what keeps Semalt’s bots from identifying themselves as bots or obeying robots.txt? Is this a talent issue, where your developers haven’t been able to discover the processes to undertake this, or is it part of a business decision on Semalt’s part? Are there plans in the future to have Semalt’s bots identify themselves properly as crawlers and to obey robots.txt?

You also claimed that my comments at http://www.closetoclever.com/semalt-com/ were incorrect, as I was not a Semalt client. Were there any specific factual errors that you would like to address?

Thanks again for taking the time to answer these,

Jessica Rose

 

Hello Jessica Rose,

Thanks for your email. First of all I would like to bring apology on behalf of my company if our bots caused you some difficulties. I can assure you, all the visits on your website were accidental. At this moment our specialists are taking drastic actions to prevent these visits. Thank you for pointing to our service drawbacks. We appreciate your help and it is very important to us.

Our service has been launched quite recently and unfortunately there are still some bugs and shortcomings. Please, respect this fact. We are working hard trying to fix the existing errors and I hope soon our users won’t have any claims.

As you might notice, every user can manually remove their URLs from the Semalt database using Semalt Crawler. Furthermore, our Support center specialists are ready to come to the aid and remove URLs from the base once the website owner submits a request. We consider every single request and guarantee that every user will get a proper respond.

We realize this may bring some inconveniences, but unfortunately at the moment we can’t offer another way of solving this issue.

As for the comment posted on your blog, I believe it’s impossible to evaluate all the pros and cons unless you have the complete picture of the service. Probably once you try to use Semalt features you will change your mind.

Anyway, we thank you for your feedback, since we appreciate every opinion relating Semalt.

Sincerely yours,

Semalt LLC manager, Alex Andrianov

 

Thanks for the response, but would it be possible to have you address my specific questions more directly?

1. Are you claiming that your bots’ failure to identify themselves as web crawlers is due to a technical failure?
2. Are you claiming that your bots not obeying robots.txt is due to a technical failure?
3. Do you have plans to make your bots identify themselves as web crawlers?
4. Do you have plans to have your bots comply with robots.txt?

Jessie

 

Dear Jessica,

I will try to give the most definite answers to your questions. As I mentioned before our service has recently appeared on the web which causes some technical unavailability. Today we upgrade the web scanning process and adjust our robots. Unfortunately sometimes Semalt bots visit random websites, but we do all our best to solve this problem in the shortest possible time.

Thank you for your email and interest to Semalt.com service. Your opinion is very important to us.

Sincerely yours,

Semalt LLC manager, Alex Andrianov

I’m not sure that’s answering much. I’m really looking to find out:

1. Will your crawlers be respecting robots.txt after your upgrade?
2. Will your crawlers be identifying themselves as web crawlers after your upgrade?

Jessie

He hasn’t yet replied to this email, but responded to tweets on the subject:
[Screenshots of his Twitter replies]

What we learned from this exchange:

Nothing, really. There were some vague claims that the problems I’d listed were “bugs”, but nothing specifically addressing Semalt’s bots ignoring robots.txt or failing to properly identify themselves as web crawlers. Apparently several weeks of visits to sites across the web were “accidental”.

Why this is nonsense:

Given how easy it is to create a robots.txt-compliant crawler, a bot’s failure to identify itself as a web crawler or to obey robots.txt can only be viewed as a deliberate choice by its designer or as gross incompetence. While my own technical skills are hardly professional grade, I’m confident that I could put together a simple web crawler that obeys robots.txt over the weekend (check in on Tuesday; I’ll be posting the results of my efforts). For a professional enterprise that sources its data by crawling the web to claim that following industry conventions is beyond its technical ability leaves me wondering whether they’re fools or liars.
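
To give a sense of how little work is involved, here’s a minimal sketch of a compliant fetch in Python using only the standard library. The site URLs and bot name are placeholders, not anything Semalt actually uses:

import urllib.robotparser
import urllib.request

# Placeholder crawler identity; a real bot would publish its own name and info URL
BOT_NAME = "ExampleBot/1.0 (+http://example.com/bot-info)"

# Fetch and parse the target site's robots.txt before requesting anything else
robots = urllib.robotparser.RobotFileParser()
robots.set_url("http://example.com/robots.txt")
robots.read()

url = "http://example.com/some-page"
if robots.can_fetch(BOT_NAME, url):
    # Identify ourselves as a crawler via the User-Agent header
    request = urllib.request.Request(url, headers={"User-Agent": BOT_NAME})
    page = urllib.request.urlopen(request).read()
else:
    print("robots.txt asks us not to fetch", url)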


What is Semalt.com?

If you’re keeping track of your website’s traffic through Google Analytics, you’ve probably noticed referral visits from a website called semalt.com in recent weeks. Semalt is a web crawler designed to gather data for Semalt’s marketing platform. The visits showing up in your logs are automated programs interacting with your site.

The difference between Semalt.com and reputable crawlers

If you look through your Google Analytics referral data, you’ll notice that other large web crawlers such as Googlebot, MJ12bot, Rogerbot and Bingbot don’t show up in your logs. Semalt’s crawlers showing up in your traffic reports is unusual because most bots identify themselves as web crawlers and are thus excluded from your traffic data. The result is skewed traffic data, especially for smaller sites, where semalt.com visits make up a larger percentage of total traffic.
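
For context, reputable crawlers announce themselves in the User-Agent header sent with every request, which is how analytics tools know to filter them out. Googlebot’s, for example, looks roughly like this (the exact string varies by version):

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)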

Semalt also doesn’t respect robots.txt (an easy way for webmasters to keep bots off their sites); instead, it asks that concerned webmasters seek Semalt out and add themselves to a no-crawl list that Semalt maintains. I reached out to Semalt’s Alex Andrianov on Twitter to ask if their crawlers were ignoring robots.txt. He confirmed that Semalt.com’s crawler doesn’t respect robots.txt and claimed that they were unable to have it do so.
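
For a sense of how low the technical bar is, robots.txt directives look like this. A file of this shape at the root of a site tells a compliant crawler to stay away entirely; the user-agent token below is hypothetical, since Semalt’s crawler doesn’t announce one:

# Hypothetical bot name; Semalt's crawler doesn't actually announce one
User-agent: SemaltBot
Disallow: /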

[Screenshot: Twitter exchange with Alex Andrianov of Semalt.com]

How to stop semalt.com from visiting your site

As Alex suggests, you can submit your site to Semalt at their site to ask for removal from their crawl, though there’s no way to tell whether they’ll act on the request. As I’m inclined to distrust crawlers that don’t respect robots.txt, I’ve opted to block their access to my site through .htaccess, as outlined by logorrhoea.net.
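
The general approach is to have Apache’s mod_rewrite refuse any request whose referrer mentions semalt.com. A minimal sketch of the idea follows; defer to the logorrhoea.net post for the exact rules they recommend:

# Sketch only; see the logorrhoea.net post for the exact rules
RewriteEngine On
RewriteCond %{HTTP_REFERER} semalt\.com [NC]
RewriteRule .* - [F]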

Update 15/4/14: Semalt manager Alex Andrianov suggested that parts of this post may be factually incorrect because I failed to note that I am not a Semalt customer. I would like to state that I am not a Semalt client, but I stand by the information listed here and welcome any factual corrections.



Birmingham Open Code

From April the 8th, there will be weekly collaborative programming study sessions in Birmingham. We’ll be meeting in the Woodman Pub from 6 pm.

Birmingham Open Code is designed to provide a peer-supported, mixed-level learning environment. Programmers and aspiring programmers working in any language are welcome. The weekly schedule is designed to create a casual environment where learners can drop in for social learning as needed, without feeling the need to make every event. We’re looking to keep these study sessions as inclusive as possible. You’re welcome no matter your skill level, level of education, age, gender, race, sexual identity, or sexual orientation.

If you’re an established programmer, bring your laptop and be ready to help out newbies while socializing with your peers. If you’ve never programmed before and want to start, bring some great questions to get you started in the right direction.

There are also a number of hands-on workshops in a range of technologies and experience levels in the pipeline. These may be added as monthly events to supplement the Open Code study sessions. Currently workshops in introductory and advanced Python, technical writing and Ruby have been proposed. To lead your own workshop, get in touch at jessica(at)closetoclever.com.

The space is handicap accessible and close to both Birmingham’s Moor Street and New Street stations.


SEO Basics: Google Penalties

We’ve already looked at why links are important in SEO. Having relevant, high-quality sites linking to your content can offer search engines a vote of confidence about your content. In an effort to keep webmasters from creating spammy or valueless links to artificially inflate the value of their content, Google has released algorithms to detect unnatural linking. The Penguin updates are designed to find unnatural linking patterns and to automatically adjust the rank of sites found to have violated Google’s guidelines. These negative adjustments are called penalties.

Types of Google penalties

Google penalties can be assigned automatically through Google’s algorithms or assigned manually. With a manual penalty, you’ll be alerted to the penalty within Google Webmaster Tools, with details about the penalty and examples of the guideline violations found. Registering your site with Webmaster Tools is a great idea regardless of your risk of penalties, as it will offer you valuable information about your site.

Algorithmic penalties are assigned automatically and aren’t accompanied by a notice of penalty. The best way to determine whether your site might have been hit by an algorithmic penalty is to look for dramatic changes in search rankings and traffic that happen independently of changes to the site.

How to avoid penalties

If you’re the only one working on your site, follow Google’s guidelines when creating content and links to your site. If you’re working on your site alongside others, be aware of the work they’re doing and make sure everyone involved understands the guidelines and the risks of not adhering to them. If you’re outsourcing your SEO to a third party, know what work they’re doing on your behalf. Monitoring your new backlinks in Google Webmaster Tools or a third-party service like Moz or Majestic SEO can help you track links being built to your site before they cause any problems. Majestic will let you check all the backlinks of your own site for free, making it easy to keep tabs on your risk level at no extra cost.

Have questions about Google penalties? Ask in the comments below or find me on Twitter.
