Month: January 2015

Report server log file eating up disk space – ReportServerTempDB

I got a 400 GB log file for my birthday today.

This 400 GB made me mad. It's just a temp db, and all processes were down this morning. Looking around on the internet, this seems to be a commonly known problem; some even state that it has been acknowledged by Microsoft.

After reading some blogs on possible solutions, I decided to tackle the disk problem first. Here are the checks you should do before you start troubleshooting:

You need to rule out that you brought this on yourself: ReportServerTempDB needs to be in the Simple recovery model, which means there are no log backups and SQL Server should automatically reclaim space in the log file. Well, that is how it should work. In this case, the log space kept growing while the database was already in Simple mode, indicating something is wrong with the report server itself.
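
A quick sanity check could look like this (a sketch; sys.databases shows the recovery model, and DBCC SQLPERF(LOGSPACE) shows how full each transaction log is):

-- Verify the recovery model of the report server databases
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name IN (N'ReportServer', N'ReportServerTempDB');

-- Show the size and percentage used of every transaction log
DBCC SQLPERF(LOGSPACE);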

It could also be worthwhile to check which reports are costing you the most, since it's not just your log space that the report server invades; your CPU is probably a victim too. The query below counts the stored chunk segments per report:

SELECT 
	sd.ReportPath,
	COUNT(s.SegmentId) as CountOfSegmentId
FROM 
	[ReportServerTempDB].dbo.Segment s 
INNER JOIN 
	[ReportServerTempDB].dbo.ChunkSegmentMapping m 
ON 
	m.SegmentId = s.SegmentId
INNER JOIN 
	[ReportServerTempDB].dbo.SegmentedChunk c 
ON 
	c.ChunkId = m.ChunkId
INNER JOIN 
	[ReportServerTempDB].dbo.SessionData sd 
ON 
	sd.SnapshotDataID = c.SnapshotDataId 
GROUP BY	
	sd.ReportPath
ORDER BY 
	CountOfSegmentId DESC;

After these checks we have two routes to go, but today we'll start with the quick and dirty one: solving the disk shortage, since all processes are at a standstill due to this 'Stay Puft' log file. We need to shrink this beast. I should tell you that shrinking your database is pure evil and risky; you should never ever ever do this! …Unless even Microsoft says it's OK. And it's just temp data, it should've just deleted itself.

First, we are going to run a full backup of the ReportServerTempDB, to an external location of course. Once this is done, we can start with the clean-up of the log file.
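
Something along these lines will do; the backup path is just a placeholder, point it to your own external location:

-- Full backup of ReportServerTempDB (the UNC path is a placeholder)
BACKUP DATABASE [ReportServerTempDB]
TO DISK = N'\\backupserver\sqlbackup$\ReportServerTempDB_Full.bak'
WITH INIT, STATS = 10;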

Bring the database into the Full recovery model; this allows you to alter the files. (The same steps are also sketched in T-SQL after this walkthrough.)

Right-click the ReportServerTempDB database, go to Tasks –> Shrink –> Files, check the available free space, and tick the 'Release unused space' option.

This should give you some space back; in my case 400 GB. Whoohoo, Dobby is free! Don't forget to put your ReportServerTempDB back into Simple mode!
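
For reference, here is the whole sequence as a T-SQL sketch. The logical log file name is an assumption on my part; verify it with sys.database_files before running the shrink:

-- 1. Switch to the Full recovery model (as described above)
ALTER DATABASE [ReportServerTempDB] SET RECOVERY FULL;
GO

-- 2. Shrink the log file; 'ReportServerTempDB_log' is the usual logical
--    name, but check it first with: SELECT name FROM sys.database_files;
USE [ReportServerTempDB];
GO
DBCC SHRINKFILE (N'ReportServerTempDB_log', 1024); -- target size in MB, an example value
GO

-- 3. And back to the Simple recovery model
ALTER DATABASE [ReportServerTempDB] SET RECOVERY SIMPLE;
GO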

You could also shrink the log file with the 'Shrink file' option, but like I said earlier it's better to avoid that, because it could lead to faulty log files when you need to recover. In the case of this ReportServerTempDB, though, the risk is low.

Now that we have some disk space back, we need to dig deeper to find the cause of this log file eating up all that precious space. The next topic will be how to fix this Stay Puft log file growth.


Beginning Node.js – Testing – part 5

Testing for the newbie

To test your Node APIs you will need to install the following dependencies:

  • Mocha. Mocha is a library to run tests, AKA a test runner, or test infrastructure. Install it globally, like so: npm install -g mocha.
  • Chai. With Chai you can use "assert that..", "should be.." or "expect that..", depending on the test philosophy you are using (BDD, TDD and the like). Install it locally: npm install chai.
  • Superagent. Superagent is a library that can call a web API, like so: superagent.get('http://localhost:3001/food'). Install it locally: npm install superagent.

Of course there are alternatives (Jasmine, Should, ..) but in my humble experience the above three work just fine. At first I was wondering why I needed so many libraries, but I guess this is all about the freedom to mix and match the modules you like the most. Which is quite nice.

Test the API

This is an example test of the 'list' function that returns all the food. Does the API return a JSON array? Well, it should, so that is what I expect.

First, create a folder named 'test'.
Then put a file in it (e.g. test1.js), with these contents:

var superagent = require('superagent');
var chai = require('chai');
var expect = chai.expect;

describe("Food JSON api", function() {  

  it("should return a json array", function(done) {
    superagent.get('http://localhost:3001/food')
      .end(function(err, res) {
        expect(err).to.eql(null);
        expect(res.status).to.equal(200);
        expect(res.body.success).to.equal(true);
        expect(res.type).to.equal("application/json");
        done();
      });
  });

});

Now run the test with the command: mocha.

[jacqueline@pc01 Restaurant]$ mocha

  Food JSON api
    ✓ should return a json array 


  1 passing (39ms)

Another test

With SuperAgent you can also test a POST action (be sure not to use your production db, but more about different environments later).

it("should post a new piece of food", function(done) {
    superagent.post('http://localhost:3001/food/')
      .send({
        name: "awesome food in test db from env", description: "awesome food description", price: "16,00"
      })
      .end(function(err, res) {
        expect(err).to.eql(null);
        expect(res.body._id).to.not.be.eql(null);
        id = res.body._id;
        done();
      });
  });

Run the test:

[jacqueline@pc01 Restaurant]$ mocha


  Food JSON api
    ✓ should return a json array 
    ✓ should post a new piece of food 


  2 passing (38ms)

So, this was just a wee intro to testing web APIs. There is lots more to discover about this subject!

I just put the code online at GitHub. It is still a work in progress though.


Beginning Node.js – REST API with a Mongodb backend (refactoring) – part 4

Refactoring

The code in the entry point main.js from the former post is a bit verbose. If an application becomes more complex, you don't really want to put all your logic in one file. Best practice is to keep things small, splitting the logic into separate modules that each do one thing.

Disclaimer: my blog posts may not be best practice for everybody in every situation. The code I share, however, works for me and gets the job done. I strive to keep everything as simple as possible.

Model, View, Controller

In this post we will create a Model and a Controller (no Views because this is still just a REST API without a frontend).
Finally we’ll create the routes.

The main.js file requires the routes, the routes require the controller, the controller requires the model.

Yep, that should do.

Create a model

So let's clean out the mess. Create a new folder named 'app'. We will move the 'Food' model from main.js into a separate file.

I will save the file as model.food.js.

var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

var FoodSchema = new Schema({
    name: String,
    description: String,
    price: String
});

mongoose.model('Food', FoodSchema);
module.exports = mongoose.model('Food');

‘Module.exports’ wraps everything so that later on you can “require” this module and use the “new” operator to create a new instance of the Food object type, which is done in the controller in the next paragraph.

Create a controller

I put the CRUD into a controller (per this example).
I will save the file as controller.food.js
Notice how I ‘require’ the Model in the first line.

var Food = require('./model.food.js');

exports.create = function (req, res) {
    var food = new Food(req.body);
    console.log(food);
    food.save(function (err) {
        if (err) {
            return res.send({
                message: err
            });
        } else {
            res.send({
                success: true,
                food: food
            });
        }
    });
};

exports.list = function (req, res) {
    Food.find(function (err, data) {
        if (err) {
            // return here, so we don't try to send a second response below
            return res.send(err);
        }
        res.json({
            success: true,
            food: data
        });
    });
};


exports.findOne = function (req, res) {
    Food.findOne({
        _id: req.params.id
    }, function (error, response) {
        if (error) {
            res.send(error);
        } else {
            res.send({
                success: true,
                food: response
            });
        }
    });
};

exports.findByName = function (req, res) {
    Food.findOne({
        name: req.params.name
    }, function (error, response) {
        if (error || !response) {
            // keep the status in the body in sync with the HTTP status code
            res.status(404).send({
                status: 404,
                message: 'not found'
            });
        } else {
            res.send({
                success: true,
                food: response
            });
        }
    });
};

Here I am basically wrapping methods around the Food object, using exports.methodname.
In case you are wondering (I was, and have to admit I still am sometimes), this article explains the difference between 'module.exports' and 'exports' as follows: 'module.exports' is the real deal, exports is just its little helper. For now, I am OK with that. Let's just get things done already. 🙂
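
A tiny sketch of my own (not from that article) to show the difference:

// 'exports' starts out as a reference to 'module.exports' (an empty object).
exports.greet = function () { return 'hi'; };   // works: adds a property

// Reassigning 'exports' only rebinds the local variable;
// require() still returns whatever 'module.exports' points to.
exports = function () { return 'hi'; };         // NOT exported

// Reassigning 'module.exports' is 'the real deal':
module.exports = function () { return 'hi'; };  // this is what require() returns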

Separate the routes

Here the controller.food.js file is required.
I will save the file as routes.food.js.

//mongodb
var foodstuff = require('./controller.food');

module.exports = function (app) {
    app.route('/food').post(foodstuff.create);
    app.route('/food').get(foodstuff.list);
    app.route('/food/:id').get(foodstuff.findOne);
    app.route('/foodname/:name').get(foodstuff.findByName);
};
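
As a side note, Express also lets you chain the verbs on a single route() call, so an equivalent version of routes.food.js saves repeating the '/food' path:

//mongodb
var foodstuff = require('./controller.food');

module.exports = function (app) {
    app.route('/food')
        .post(foodstuff.create)
        .get(foodstuff.list);
    app.route('/food/:id').get(foodstuff.findOne);
    app.route('/foodname/:name').get(foodstuff.findByName);
};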

Well then, we’re done. Now let’s give main.js a haircut.

Refactor main.js

So, we now have a small and clean main.js file.

var express = require('express');
var app = express();
var bodyparser = require('body-parser');
var mongoose = require('mongoose');

app.use(bodyparser.urlencoded({
  extended: true
}));

app.use(bodyparser.json());

mongoose.connect('mongodb://localhost:27017/restaurant');

require('./app/routes.food')(app);

app.use(express.static(__dirname + '/public'));

app.set('port', process.env.PORT || 3001);
app.listen(app.get('port'));
console.log("the server is running on http://localhost:" + app.get('port'));

If you followed along, the directory structure looks like this:

.
├── app
│   ├── controller.food.js
│   ├── model.food.js
│   └── routes.food.js
├── main.js
├── node_modules
│   ├── body-parser
│   ├── express
│   └── mongoose
├── package.json
└── public
    └── index.html

What’s next?

Next up is authentication testing.


Create and restore a database with multiple secondary data files

SQL databases are all about performance. If databases grow very large, it might be a good idea to spread your files across multiple disks.

A database can consist of three file types, .mdf, .ldf and .ndf, where .ndf is a user-defined secondary data file. These files can be created to split up a fast-growing .mdf. For example, you create three files, Data1.ndf, Data2.ndf and Data3.ndf, spread over three disk drives and assigned to a filegroup, e.g. 'fg1'. Tables can then be created specifically on the filegroup 'fg1', so queries against those tables are spread across multiple disks; this will improve your database performance.
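
For example (the table and column names are made up for illustration), pinning a table to the 'fg1' filegroup would look like this:

-- Create a table on the 'fg1' filegroup instead of the default filegroup
CREATE TABLE dbo.SalesHistory
(
    SalesId   INT IDENTITY(1,1) PRIMARY KEY,
    SalesDate DATE,
    Amount    DECIMAL(10, 2)
) ON [fg1];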

Here’s a script to create a database with multiple filegroups:

USE [master]
GO

/****** Object:  Database [AdventurePark]    Script Date: 01/01/2015 21:13:57 ******/
IF  EXISTS (SELECT name FROM sys.databases WHERE name = N'AdventurePark')
DROP DATABASE [AdventurePark]
GO

USE [master]
GO

/****** Object:  Database [AdventurePark]    Script Date: 01/01/2015 21:13:57 ******/
CREATE DATABASE [AdventurePark] ON  PRIMARY 
( NAME = N'AdventureParkPrimary', FILENAME = N'D:\DATA\AdventurePark.mdf' , SIZE = 5120KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Data]  DEFAULT 
( NAME = N'AdventureParkData', FILENAME = N'D:\DATA\AdventurePark_1.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2008] 
( NAME = N'AdventureParkFiles2008_1', FILENAME = N'E:\DATA\AdventurePark_2.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2008_2', FILENAME = N'E:\DATA\AdventurePark_3.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2009] 
( NAME = N'AdventureParkFiles2009_1', FILENAME = N'E:\DATA\AdventurePark_4.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2009_2', FILENAME = N'E:\DATA\AdventurePark_5.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2010] 
( NAME = N'AdventureParkFiles2010_1', FILENAME = N'E:\DATA\AdventurePark_6.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2010_2', FILENAME = N'E:\DATA\AdventurePark_7.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2011] 
( NAME = N'AdventureParkFiles2011_1', FILENAME = N'E:\DATA\AdventurePark_8.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2011_2', FILENAME = N'E:\DATA\AdventurePark_9.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2012] 
( NAME = N'AdventureParkFiles2012_1', FILENAME = N'E:\DATA\AdventurePark_10.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2012_2', FILENAME = N'E:\DATA\AdventurePark_11.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2013] 
( NAME = N'AdventureParkFiles2013_1', FILENAME = N'E:\DATA\AdventurePark_12.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2013_2', FILENAME = N'E:\DATA\AdventurePark_13.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
 FILEGROUP [Files2014] 
( NAME = N'AdventureParkFiles2014_1', FILENAME = N'D:\DATA\AdventurePark_14.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%), 
( NAME = N'AdventureParkFiles2014_2', FILENAME = N'D:\DATA\AdventurePark_15.ndf' , SIZE = 4896KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10%)
 LOG ON 
( NAME = N'AdventurePark_log', FILENAME = N'L:\LOGS\AdventurePark_16.ldf' , SIZE = 4896KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

ALTER DATABASE [AdventurePark] SET COMPATIBILITY_LEVEL = 100
GO

IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [AdventurePark].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO

ALTER DATABASE [AdventurePark] SET ANSI_NULL_DEFAULT OFF 
GO

ALTER DATABASE [AdventurePark] SET ANSI_NULLS OFF 
GO

ALTER DATABASE [AdventurePark] SET ANSI_PADDING OFF 
GO

ALTER DATABASE [AdventurePark] SET ANSI_WARNINGS OFF 
GO

ALTER DATABASE [AdventurePark] SET ARITHABORT OFF 
GO

ALTER DATABASE [AdventurePark] SET AUTO_CLOSE OFF 
GO

ALTER DATABASE [AdventurePark] SET AUTO_CREATE_STATISTICS ON 
GO

ALTER DATABASE [AdventurePark] SET AUTO_SHRINK OFF 
GO

ALTER DATABASE [AdventurePark] SET AUTO_UPDATE_STATISTICS ON 
GO

ALTER DATABASE [AdventurePark] SET CURSOR_CLOSE_ON_COMMIT OFF 
GO

ALTER DATABASE [AdventurePark] SET CURSOR_DEFAULT  GLOBAL 
GO

ALTER DATABASE [AdventurePark] SET CONCAT_NULL_YIELDS_NULL OFF 
GO

ALTER DATABASE [AdventurePark] SET NUMERIC_ROUNDABORT OFF 
GO

ALTER DATABASE [AdventurePark] SET QUOTED_IDENTIFIER OFF 
GO

ALTER DATABASE [AdventurePark] SET RECURSIVE_TRIGGERS OFF 
GO

ALTER DATABASE [AdventurePark] SET  DISABLE_BROKER 
GO

ALTER DATABASE [AdventurePark] SET AUTO_UPDATE_STATISTICS_ASYNC OFF 
GO

ALTER DATABASE [AdventurePark] SET DATE_CORRELATION_OPTIMIZATION OFF 
GO

ALTER DATABASE [AdventurePark] SET TRUSTWORTHY OFF 
GO

ALTER DATABASE [AdventurePark] SET ALLOW_SNAPSHOT_ISOLATION OFF 
GO

ALTER DATABASE [AdventurePark] SET PARAMETERIZATION SIMPLE 
GO

ALTER DATABASE [AdventurePark] SET READ_COMMITTED_SNAPSHOT OFF 
GO

ALTER DATABASE [AdventurePark] SET HONOR_BROKER_PRIORITY OFF 
GO

ALTER DATABASE [AdventurePark] SET  READ_WRITE 
GO

ALTER DATABASE [AdventurePark] SET RECOVERY FULL 
GO

ALTER DATABASE [AdventurePark] SET  MULTI_USER 
GO

ALTER DATABASE [AdventurePark] SET PAGE_VERIFY CHECKSUM  
GO

ALTER DATABASE [AdventurePark] SET DB_CHAINING OFF 
GO

 

Need a quick restore of a database with the data spread across multiple disks onto a new server? Here's a simple script I wrote. Just replace the names and locations and you're ready to go. (If you don't know the logical file names in the backup, see the RESTORE FILELISTONLY sketch after the script.)

USE master
GO

--ALTER DATABASE AdventurePark
--SET SINGLE_USER
----This rolls back all uncommitted transactions in the db.
--WITH ROLLBACK IMMEDIATE
--GO

RESTORE DATABASE [AdventurePark] FROM  DISK = N'\\Teletraan\sqlbackup$\AdventurePark\FULL_BU20150101_000001.bak' WITH  FILE = 1,  
MOVE N'AdventureParkPrimary'     TO N'F:\SQL AdventurePark\AdventurePark.mdf',  
MOVE N'AdventureParkData'        TO N'D:\ARCHIVEDATA\AdventurePark_1.ndf',  
MOVE N'AdventureParkFiles2008_1' TO N'D:\ARCHIVEDATA\AdventurePark_2.ndf',  
MOVE N'AdventureParkFiles2008_2' TO N'D:\ARCHIVEDATA\AdventurePark_3.ndf',  
MOVE N'AdventureParkFiles2009_1' TO N'D:\ARCHIVEDATA\AdventurePark_4.ndf',  
MOVE N'AdventureParkFiles2009_2' TO N'D:\ARCHIVEDATA\AdventurePark_5.ndf',  
MOVE N'AdventureParkFiles2010_1' TO N'D:\ARCHIVEDATA\AdventurePark_6.ndf',  
MOVE N'AdventureParkFiles2010_2' TO N'D:\ARCHIVEDATA\AdventurePark_7.ndf',  
MOVE N'AdventureParkFiles2011_1' TO N'D:\ARCHIVEDATA\AdventurePark_8.ndf',  
MOVE N'AdventureParkFiles2011_2' TO N'E:\ARCHIVEDATA\AdventurePark_9.ndf',  
MOVE N'AdventureParkFiles2012_1' TO N'E:\ARCHIVEDATA\AdventurePark_10.ndf',  
MOVE N'AdventureParkFiles2012_2' TO N'E:\ARCHIVEDATA\AdventurePark_11.ndf',  
MOVE N'AdventureParkFiles2013_1' TO N'E:\ARCHIVEDATA\AdventurePark_12.ndf',  
MOVE N'AdventureParkFiles2013_2' TO N'E:\ARCHIVEDATA\AdventurePark_13.ndf',  
MOVE N'AdventureParkFiles2014_1' TO N'D:\SQL DATA\AdventurePark_13.ndf',  
MOVE N'AdventureParkFiles2014_2' TO N'D:\SQL DATA\AdventurePark_14.ndf',  
MOVE N'AdventurePark_log' TO N'F:\SQL LOGS\AdventurePark_15.ldf',  
NOUNLOAD,  REPLACE,  STATS = 10
GO
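
One more tip: if you don't know the logical file names inside a backup file, list them first with RESTORE FILELISTONLY; it prints every logical name along with its original physical location:

RESTORE FILELISTONLY
FROM DISK = N'\\Teletraan\sqlbackup$\AdventurePark\FULL_BU20150101_000001.bak'
GO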