TinyOS and nesC. Ø TinyOS: OS for wireless sensor networks. Ø nesC: programming language for TinyOS.

TinyOS and nesC
Ø TinyOS: OS for wireless sensor networks.
Ø nesC: programming language for TinyOS.
Original slides by Chenyang Lu, adapted by Octav Chipara.

Mica2 Mote
Ø Processor: 7.4 MHz, 8-bit microcontroller; memory: 4 KB data, 128 KB program.
Ø Radio: max 38.4 Kbps.
Ø Sensors: light, temperature, acceleration, acoustic, magnetic.
Ø Power: <1 week on two AA batteries in active mode; >1 year battery life in sleep modes!

Hardware Constraints
Severe constraints on power, size, and cost →
Ø slow microprocessor
Ø low-bandwidth radio
Ø limited memory
Ø limited hardware parallelism → CPU hit by many interrupts!
Ø manage sleep modes in hardware components

Software Challenges
Ø Small memory footprint
Ø Efficiency: power and processing
Ø Concurrency-intensive operations
Ø Diversity in applications & platforms → efficient modularity, support for evolution of hardware and software

Traditional OS
Ø Multi-threaded
Ø Preemptive scheduling
Ø Threads: ready to run; executing on the CPU; waiting for data.
[Thread-state diagram: ready → executing when a thread gets the CPU; executing → ready when preempted; executing → waiting when it needs data; waiting → ready when it gets data.]

Pros and Cons of Traditional OS
Ø Multi-threaded + preemptive scheduling: preempted threads waste memory; context switch overhead.
Ø I/O: blocking I/O wastes memory on blocked threads; polling (busy-wait) wastes CPU cycles and power.

Example: Preemptive Priority Scheduling
Ø Each process has a fixed priority (1 = highest);
Ø P1: priority 1; P2: priority 2; P3: priority 3.
[Timeline, 0-60 time units: P2 is released first and starts running; P1 is released and preempts P2; when P1 completes, P2 resumes and finishes; P3, released in the meantime, runs last.]

Context Switch
[Figure: switching the CPU between process 1 and process 2 requires saving and restoring the PC and registers to and from each process's state in memory.]

Existing Embedded OS
Ø QNX context switch = 2400 cycles on x86
Ø pOSEK context switch > 40 µs
Ø CREEM → no preemption
(System architecture directions for network sensors, J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister. ASPLOS 2000.)

TinyOS Solutions
Ø Efficient modularity: application = scheduler + graph of components, compiled into one executable; only the needed components are compiled/loaded.
Ø Concurrency: event-driven architecture.
[Figure: Main (includes the scheduler) on top of the Application (user components), which sits on Actuating, Sensing, and Communication components over the hardware abstractions. Modified from D. Culler et al., TinyOS boot camp presentation, Feb 2001.]

Example: Surge

Typical Application
[Figure: layered component graph, from D. Culler et al., TinyOS boot camp presentation, Feb 2001. A sensing application and a routing application on top; below them the Routing Layer (routing), Messaging Layer (messaging), Radio Packet (packet), and Radio Byte / MAC (byte) in software; at the bottom, the hardware: RFM radio (bit), photo and Temp sensors, clocks, ADC, and i2c.]

Two-level Scheduling
Ø Events handle interrupts: interrupts trigger the lowest-level events; events can signal events, call commands, or post tasks.
Ø Tasks perform deferred computations.
Ø Interrupts preempt tasks and other interrupts (see the sketch below).
[Figure: tasks are POSTed into a FIFO queue, while hardware interrupts trigger events and commands that preempt the running task.]
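
The split can be illustrated with a short nesC-style sketch. This is not code from the slides: the module name ProcessP, the task name processData, and the surrounding details are assumptions; only the pattern matters, namely doing the minimum in the async event and posting a task for the deferred work.

  // Hedged sketch: minimal work in interrupt context, deferred work in a task.
  module ProcessP {
    uses interface ADC;
  }
  implementation {
    norace uint16_t lastSample;          // written in the event, consumed by the task

    task void processData() {
      // Deferred computation: runs in task (synchronous) context and is never
      // preempted by another task, only by interrupts.
      // ... filter/encode lastSample here ...
    }

    async event error_t ADC.dataReady(uint16_t data) {
      // Interrupt-level event: keep it short, then hand off to a task.
      lastSample = data;
      post processData();
      return SUCCESS;
    }
  }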

Multiple Data Flows
Ø Respond quickly: a sequence of events/commands through the component graph; immediate execution of function calls (e.g., get a bit out of the radio hardware before it gets lost).
Ø Post tasks for deferred computations (e.g., encoding).
Ø Events preempt tasks to handle new interrupts.

Sending a Message
[Timing diagram of event propagation; steps 0-6 take about 95 microseconds in total.]

Scheduling
Ø Interrupts preempt tasks: respond quickly; events/commands are implemented as function calls.
Ø Tasks cannot preempt tasks: fewer context switches → efficiency; single stack → low memory footprint. TinyOS 2 supports a pluggable task scheduler (default: FIFO).
Ø The scheduler puts the processor to sleep when no event/command is running and the task queue is empty (see the sketch below).
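
The scheduler described above can be pictured as a simple run-to-completion loop. The following C-like sketch is an approximation for illustration, not the actual TinyOS source; the type and helper names (task_fn_t, popTask, sleepUntilInterrupt) are assumptions.

  // Sketch of a FIFO, run-to-completion task scheduler (assumed helpers).
  for (;;) {
    task_fn_t next = popTask();     // atomically dequeue the oldest posted task
    if (next != NULL) {
      next();                       // run the task to completion; only interrupts
                                    // (events/commands) may preempt it, never another task
    } else {
      sleepUntilInterrupt();        // queue empty: put the CPU to sleep until the
                                    // next interrupt arrives and posts more work
    }
  }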

Space Breakdown
Code size for the ad hoc networking application (D. Culler et al., TinyOS boot camp presentation, Feb 2001):
Ø Scheduler: 144 bytes of code, 226 bytes of data.
Ø Total: 3430 bytes of code.
[Bar chart of code size in bytes per component: interrupts, message dispatch, initialization, C runtime, scheduler, light sensor, clock, LED control, messaging layer, packet layer, radio interface, routing, application, radio byte encoder.]

Power Breakdown

  Component     Active       Idle          Sleep
  CPU           5 mA         2 mA          5 µA
  Radio         7 mA (TX)    4.5 mA (RX)   5 µA
  EEPROM        3 mA         0             0
  LEDs          4 mA         0             0
  Photo diode   200 µA       0             0
  Temperature   200 µA       0             0

Battery: Panasonic CR2354 lithium, 560 mAh.
The mote runs for 35 hours at peak load and for years at minimum load; that's three orders of magnitude of difference! A one-byte transmission uses the same energy as approximately 11,000 cycles of computation.

Time Breakdown
Packet reception work breakdown:

  Component            Work breakdown   CPU utilization   Energy (nJ/bit)
  AM                   0.05%            0.20%             0.33
  Packet               1.12%            0.51%             7.58
  Radio handler        26.87%           12.16%            182.38
  Radio decode thread  5.48%            2.48%             37.2
  RFM                  66.48%           30.08%            451.17
  Radio reception      -                -                 1350
  Idle                 -                54.75%            -
  Total                100.00%          100.00%           2028.66

Ø 50-cycle task overhead (6 byte copies)
Ø 10-cycle event overhead (1.25 byte copies)

Advantages
Ø Small memory footprint: only the needed components are compiled/loaded; single stack for tasks.
Ø Power efficiency: put the CPU to sleep whenever the task queue is empty; TinyOS 2 provides power management for peripherals and microprocessors (ICEM).
Ø Efficient modularity: event/command interfaces between components; events/commands implemented as function calls.
Ø Concurrency-intensive operations: events/commands + tasks.

Issues
Ø Lacks preemptive real-time scheduling: an urgent task may wait for non-urgent ones.
Ø Lacks flexibility: static linking only; cannot change parts of the code dynamically.
Ø Unfamiliar APIs: the POSIX thread library in TinyOS 2.x mitigates the problem.
Ø No protection barrier between applications and the kernel.

More
Ø Multi-threaded vs. event-driven architectures: lack of empirical comparison against existing OSes; a standard OS is more likely to be adopted by industry; the jury is still out.
Ø Alternative: a native Java virtual machine. Java programming; the virtual machine provides protection; example: Sun SPOT.

nesC
Ø Programming language for TinyOS and its applications.
Ø Supports TinyOS components.
Ø Whole-program analysis at compile time: improves robustness (detects race conditions); enables optimization (function inlining).
Ø Static language: no function pointers, no malloc; the call graph and variable accesses are known at compile time.

Interfaces

  interface Clock {
    command error_t setRate(char interval, char scale);
    event error_t fire();
  }

  interface Send {
    command error_t send(message_t *msg, uint16_t length);
    event error_t sendDone(message_t *msg, error_t success);
  }

  interface ADC {
    command error_t getData();
    event error_t dataReady(uint16_t data);
  }

Ø Interfaces are bi-directional
Ø They include both commands and events
Ø Java interfaces are a good analogy (see the usage sketch below)
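
To make the bi-directional nature concrete, here is a hedged sketch of a hypothetical user of the Clock interface. The module name BlinkP, the Leds interface, the setRate arguments, and the assumption that StdControl commands return error_t are all illustrative, not from the slides; the point is only the direction of calls: commands flow down into the provider, events flow up and must be implemented by the user.

  // Hypothetical Clock user (sketch only; names and values assumed).
  module BlinkP {
    provides interface StdControl;
    uses interface Clock;
    uses interface Leds;                        // assumed LED interface
  }
  implementation {
    command error_t StdControl.init()  { return SUCCESS; }

    command error_t StdControl.start() {
      return call Clock.setRate(128, 1);        // command call goes "down" to the provider
    }

    command error_t StdControl.stop()  { return SUCCESS; }

    event error_t Clock.fire() {
      call Leds.led0Toggle();                   // event comes "up"; the user implements it
      return SUCCESS;
    }
  }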

Modules
Ø Implement functionality, with C-like syntax.
Ø Provide and use sets of interfaces: the provider of an interface implements its commands; the users of an interface implement its events.
Ø WARNING: modules != objects; each module has a single instance.

  module TimerP {
    provides {
      interface StdControl;
      interface Timer;
    }
    uses interface Clock;
    ...
  }
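
A hedged sketch of what TimerP's implementation block might contain, assuming for illustration that the Timer interface declares just a start command and a fired event with Clock-like parameters, and that StdControl commands return error_t (neither signature is given on the slides): the module implements the commands of the interfaces it provides and the events of the interface it uses, and signals its own event upward.

  // Sketch only; interface signatures and the forwarding are assumptions.
  implementation {
    command error_t StdControl.init()  { return SUCCESS; }
    command error_t StdControl.start() { return SUCCESS; }
    command error_t StdControl.stop()  { return SUCCESS; }

    command error_t Timer.start(char interval, char scale) {
      return call Clock.setRate(interval, scale);   // provider implements the command
    }

    event error_t Clock.fire() {
      signal Timer.fired();                         // user of Clock implements its event
      return SUCCESS;                               // and signals Timer's event upward
    }
  }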

Module

  module SurgeP {
    provides interface StdControl;
    uses interface ADC;
    uses interface Timer;
    uses interface Send;
  }
  implementation {
    bool busy;
    norace uint16_t sensorReading;

    async event result_t Timer.fired() {
      bool localBusy;
      atomic {
        localBusy = busy;
        busy = TRUE;
      }
      if (!localBusy)
        call ADC.getData();
      return SUCCESS;
    }

    async event result_t ADC.dataReady(uint16_t data) {
      sensorReading = data;
      post sendData();
      return SUCCESS;
    }
    ...
  }

Configurations
Ø Select and wire together modules.

  configuration TimerC {
    provides {
      interface StdControl;
      interface Timer;
    }
  }
  implementation {
    components TimerP, HWClock;

    StdControl = TimerP.StdControl;
    Timer = TimerP.Timer;

    TimerP.Clock -> HWClock.Clock;
  }
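
By the same pattern, a top-level configuration could wire the SurgeP module from the earlier slide to the components it uses. The sketch below is hypothetical: the configuration name SurgeAppC and the components Main, TimerC, Photo, and Comm are assumptions in the style of TinyOS 1.x, shown only to illustrate how "->" connects a used interface to its provider.

  // Hypothetical top-level wiring for Surge (names assumed).
  configuration SurgeAppC { }
  implementation {
    components Main, SurgeP, TimerC, Photo, Comm;

    Main.StdControl -> SurgeP.StdControl;   // boot/start the application
    SurgeP.Timer    -> TimerC.Timer;        // SurgeP uses the Timer provided by TimerC
    SurgeP.ADC      -> Photo.ADC;           // sensor readings from the photo sensor
    SurgeP.Send     -> Comm.Send;           // messaging through the communication stack
  }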

Example: Surge

Concurrency
Ø Race condition: concurrent interrupts/tasks update shared variables.
Ø Asynchronous code (AC): reachable from at least one interrupt handler.
Ø Synchronous code (SC): reachable from tasks only.
Ø Any update of a shared variable from AC is a potential race condition.

A Race Condition

  module SurgeP { ... }
  implementation {
    bool busy;
    norace uint16_t sensorReading;

    async event result_t Timer.fired() {
      if (!busy) {            // unprotected check...
        busy = TRUE;          // ...and set: an interrupt in between causes a race
        call ADC.getData();
      }
      return SUCCESS;
    }

    task void sendData() {
      // send sensorReading
      adcPacket.data = sensorReading;
      call Send.send(&adcPacket, sizeof adcPacket.data);
    }

    async event result_t ADC.dataReady(uint16_t data) {
      sensorReading = data;
      post sendData();
      return SUCCESS;
    }
  }

Atomic Sections

  atomic {
    <statement list>
  }

Ø Interrupts are disabled while the atomic code is being executed.
Ø But interrupts cannot be disabled for long! No loops; no commands/events; function calls are OK, but the callee must meet the restrictions too.

Prevent Race

  module SurgeP { ... }
  implementation {
    bool busy;
    norace uint16_t sensorReading;

    async event result_t Timer.fired() {
      bool localBusy;
      atomic {                 // interrupts disabled: test-and-set on busy
        localBusy = busy;
        busy = TRUE;
      }                        // interrupts enabled again
      if (!localBusy)
        call ADC.getData();
      return SUCCESS;
    }
  }

nesC Compiler
Ø Race-free invariant: any update to a shared variable occurs either from synchronous context only, or within an atomic section.
Ø The compiler returns an error if the invariant is violated.
Ø Fixes: make accesses to shared variables atomic, or move accesses to shared variables into tasks (see the sketch below).
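
The second fix can be sketched as follows. The names (packetCount, countPacket, and the simplified Receive event) are assumptions for illustration: the async event only posts a task, so the shared counter is touched from synchronous context alone and satisfies the race-free invariant, because tasks never preempt each other.

  // Hedged sketch: fixing a race by moving the shared-variable update into a task.
  implementation {
    uint16_t packetCount;              // now updated from task (SC) context only

    task void countPacket() {
      packetCount++;                   // safe: no other task can interleave here
    }

    async event void Receive.received() {   // simplified, assumed event signature
      post countPacket();              // defer the update instead of doing it in AC
    }
  }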

Results
Ø Tested on the full TinyOS code plus applications: 186 components (121 modules, 65 configurations); 20-69 modules per application, 35 on average; 17 tasks and 75 events on average per application. Lots of concurrency!
Ø Found 156 races, 103 of them real: about 6 per 1000 lines of code.
Ø Fixing the races: add atomic sections; post tasks (move code to task context).

Optimization: Inlining

  App      Code size (inlined)   Code size (noninlined)   Code reduction   Data size   CPU reduction
  Surge    14794                 16984                    12%              1188        15%
  Maté     25040                 27458                    9%               1710        34%
  TinyDB   64910                 71724                    10%              2894        30%

(Code and data sizes in bytes.)
Inlining improves performance and reduces code size. Why?

Overhead for Function Calls
Ø Caller, making the call: push the return address onto the stack; push the parameters onto the stack; jump to the function.
Ø Callee, receiving the call: pop the parameters from the stack.
Ø Callee, returning: pop the return address from the stack; push the return value onto the stack; jump back to the caller.
Ø Caller, after the return: pop the return value.

Principles Revisited
Ø Support for TinyOS components: interfaces, modules, configurations.
Ø Whole-program analysis and optimization: improves robustness (detects race conditions); enables optimization (function inlining); also reduces the memory footprint.
Ø Static language: no malloc, no function pointers.

Issues
Ø Acceptance of a new programming language?
Ø No dynamic memory allocation: bounds the memory footprint and allows offline footprint analysis, but how do you size a buffer when the data size varies dynamically? (See the sketch below.)
Ø Restriction: no long-running code in command/event handlers or in atomic sections.
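
In practice, the usual answer is a statically sized, worst-case buffer plus a length field, which keeps the footprint analyzable at the cost of wasted space when payloads are small. A minimal sketch, with the size and names assumed purely for illustration:

  // Sketch: static worst-case buffer instead of malloc (size and names assumed).
  enum { MAX_PAYLOAD = 29 };             // worst case, fixed at compile time

  implementation {
    uint8_t buffer[MAX_PAYLOAD];         // footprint known offline
    uint8_t length;                      // number of valid bytes, <= MAX_PAYLOAD
  }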

Reading
Ø D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler, The nesC Language: A Holistic Approach to Networked Embedded Systems. [Required]
Ø J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, System Architecture Directions for Network Sensors.
Ø P. Levis and D. Gay, TinyOS Programming, Cambridge University Press, 2009. Purchase the book online, or download the first half of the published version for free: http://www.tinyos.net/